Acknowledgements
----------------

* Egor Zindy, for lsm_scan_info specifics.
* Wim Lewis for a bug fix and some LSM functions.
* Hadrien Mary for help on reading MicroManager files.
* Christian Kliche for help writing tiled and color-mapped files.
* Grzegorz Bokota, for reporting and fixing OME-XML handling issues.

Revisions
---------

2025.3.30

- Pass 5110 tests.
- Fix for imagecodecs 2025.3.30.

2025.3.13

- Change bytes2str to decode only up to first NULL character (breaking).
- Remove stripnull function calls to reduce overhead (#285).
- Deprecate stripnull function.

2025.2.18

- Fix julian_datetime milliseconds (#283).
- Remove deprecated dtype arguments from imread and FileSequence (breaking).
- Remove deprecated imsave and TiffWriter.save function/method (breaking).
- Remove deprecated option to pass multiple values to compression (breaking).
- Remove deprecated option to pass unit to resolution (breaking).
- Remove deprecated enums from TIFF namespace (breaking).
- Remove deprecated lazyattr and squeeze_axes functions (breaking).

2025.1.10

- Improve type hints.
- Deprecate Python 3.10.

2024.12.12

- Read PlaneProperty from STK UIC1Tag (#280).
- Allow 'None' as alias for COMPRESSION.NONE and PREDICTOR.NONE (#274).
- Zarr 3 is not supported (#272).

2024.9.20

- Fix writing colormap to ImageJ files (breaking).
- Improve typing.
- Drop support for Python 3.9.

2024.8.30

- Support writing OME Dataset and some StructuredAnnotations elements.

2024.8.28

- Fix LSM scan types and dimension orders (#269, breaking).
- Use IO[bytes] instead of BinaryIO for typing (#268).

2024.8.24

- Do not remove trailing length-1 dimension writing non-shaped file
  (breaking).
- Fix writing OME-TIFF with certain modulo axes orders.
- Make imshow NaN aware.

2024.8.10

- Relax bitspersample check for JPEG, JPEG2K, and JPEGXL compression (#265).

2024.7.24

- Fix reading contiguous multi-page series via Zarr store (#67).

2024.7.21

- Fix integer overflow in product function caused by numpy types.
- Allow tag reader functions to fail.

2024.7.2

- Enable memmap to create empty files with non-native byte order.
- Deprecate Python 3.9, support Python 3.13.

2024.6.18

- Ensure TiffPage.nodata is castable to dtype (breaking, #260).
- Support Argos AVS slides.

2024.5.22

- Derive TiffPages, TiffPageSeries, FileSequence, StoredShape from Sequence.
- Truncate circular IFD chain, do not raise TiffFileError (breaking).
- Deprecate access to TiffPages.pages and FileSequence.files.
- Enable DeprecationWarning for enums in TIFF namespace.
- Remove some deprecated code (breaking).
- Add iccprofile property to TiffPage and parameter to TiffWriter.write.
- Do not detect VSI as SIS format.
- Limit length of logged exception messages.
- Fix docstring examples not correctly rendered on GitHub (#254, #255).

2024.5.10

- Support reading JPEGXL compression in DNG 1.7.
- Read invalid TIFF created by IDEAS software.

2024.5.3

- Fix reading incompletely written LSM.
- Fix reading Philips DP with extra rows of tiles (#253, breaking).

2024.4.24

- Fix compatibility issue with numpy 2 (#252).

2024.4.18

- Fix write_fsspec when last row of tiles is missing in Philips slide (#249).
- Add option not to quote file names in write_fsspec.
- Allow compressing bilevel images with deflate, LZMA, and Zstd.

2024.2.12

- Deprecate dtype, add chunkdtype parameter in FileSequence.asarray.
- Add imreadargs parameters passed to FileSequence.imread.

2024.1.30

- Fix compatibility issue with numpy 2 (#238).
- Enable DeprecationWarning for tuple compression argument.
- Parse sequence of numbers in xml2dict.

2023.12.9

- Read 32-bit Indica Labs TIFF as float32.
- Fix UnboundLocalError reading big LSM files without time axis.
- Use os.sched_getaffinity, if available, to get the number of CPUs (#231).
- Limit the number of default worker threads to 32.

2023.9.26

- Lazily convert dask array to ndarray when writing.
- Allow to specify buffersize for reading and writing.
- Fix IndexError reading some corrupted files with ZarrTiffStore (#227).

2023.9.18

- Raise exception when writing non-volume data with volumetric tiles (#225).
- Improve multi-threaded writing of compressed multi-page files.
- Fix fsspec reference for big-endian files with predictors.

2023.8.30

- Support exclusive file creation mode (#221, #223).

2023.8.25

- Verify shaped metadata is compatible with page shape.
- Support out parameter when returning selection from imread (#222).

2023.8.12

- Support decompressing EER frames.
- Facilitate filtering logged warnings (#216).
- Read more tags from UIC1Tag (#217).
- Fix premature closing of files in main (#218).
- Don't force matplotlib backend to tkagg in main (#219).
- Add py.typed marker.
- Drop support for imagecodecs < 2023.3.16.

2023.7.18

- Limit threading via TIFFFILE_NUM_THREADS environment variable (#215).
- Remove maxworkers parameter from tiff2fsspec (breaking).

2023.7.10

- Increase default strip size to 256 KB when writing with compression.
- Fix ZarrTiffStore with non-default chunkmode.

2023.7.4

- Add option to return selection from imread (#200).
- Fix reading OME series with missing trailing frames (#199).
- Fix fsspec reference for WebP compressed segments missing alpha channel.
- Fix linting issues.
- Detect files written by Agilent Technologies.
- Drop support for Python 3.8 and numpy < 1.21 (NEP29).

2023.4.12

- Do not write duplicate ImageDescription tags from extratags (breaking).
- Support multifocal SVS files (#193).
- Log warning when filtering out extratags.
- Fix writing OME-TIFF with image description in extratags.
- Ignore invalid predictor tag value if prediction is not used.
- Raise KeyError if ZarrStore is missing requested chunk.

2023.3.21

- Fix reading MMstack with missing data (#187).

2023.3.15

- Fix corruption using tile generators with prediction/compression (#185).
- Add parser for Micro-Manager MMStack series (breaking).
- Return micromanager_metadata IndexMap as numpy array (breaking).
- Revert optimizations for Micro-Manager OME series.
- Do not use numcodecs zstd in write_fsspec (kerchunk issue 317).
- More type annotations.

2023.2.28

- Fix reading some Micro-Manager metadata from corrupted files.
- Speed up reading Micro-Manager indexmap for creation of OME series.

2023.2.27

- Use Micro-Manager indexmap offsets to create virtual TiffFrames.
- Fixes for future imagecodecs.

2023.2.3

- Fix overflow in calculation of databytecounts for large NDPI files.

2023.2.2

- Fix regression reading layered NDPI files.
- Add option to specify offset in FileHandle.read_array.

2023.1.23

- Support reading NDTiffStorage.
- Support reading PIXTIFF compression.
- Support LERC with Zstd or Deflate compression.
- Do not write duplicate and select extratags.
- Allow to write uncompressed image data beyond 4 GB in classic TIFF.
- Add option to specify chunkshape and dtype in FileSequence.asarray.
- Add option for imread to write to output in FileSequence.asarray (#172).
- Add function to read GDAL structural metadata.
- Add function to read NDTiff.index files.
- Fix IndexError accessing TiffFile.mdgel_metadata in non-MDGEL files.
- Fix unclosed file ResourceWarning in TiffWriter.
- Fix non-bool predictor arguments (#167).
- Relax detection of OME-XML (#173).
- Rename some TiffFrame parameters (breaking).
- Deprecate squeeze_axes (will change signature).
- Use defusedxml in xml2dict.

2022.10.10

- Fix RecursionError in peek_iterator.
- Fix reading NDTiffv3 summary settings.
- Fix svs_description_metadata parsing (#149).
- Fix ImportError if Python was built without zlib or lzma.
- Fix bool of COMPRESSION and PREDICTOR instances.
- Deprecate non-sequence extrasamples arguments.
- Parse SCIFIO metadata as ImageJ.

2022.8.12

- Fix writing ImageJ format with hyperstack argument.
- Fix writing description with metadata disabled.
- Add option to disable writing shaped metadata in TiffWriter.

2022.8.8

- Fix regression using imread out argument (#147).
- Fix imshow show argument.
- Support fsspec OpenFile.

2022.8.3

- Fix regression writing default resolutionunit (#145).
- Add strptime function parsing common datetime formats.

2022.7.31

- Fix reading corrupted WebP compressed segments missing alpha channel (#122).
- Fix regression reading compressed ImageJ files.

2022.7.28

- Rename FileSequence.labels attribute to dims (breaking).
- Rename tifffile_geodb module to geodb (breaking).
- Rename TiffFile._astuple method to astuple (breaking).
- Rename noplots command line argument to maxplots (breaking).
- Fix reading ImageJ hyperstacks with non-TZC order.
- Fix colorspace of JPEG segments encoded by Bio-Formats.
- Fix fei_metadata for HELIOS FIB-SEM (#141, needs test).
- Add xarray style properties to TiffPage (WIP).
- Add option to specify OME-XML for TiffFile.
- Add option to control multiscales in ZarrTiffStore.
- Support writing to uncompressed ZarrTiffStore.
- Support writing empty images with tiling.
- Support overwriting some tag values in NDPI (#137).
- Support Jetraw compression (experimental).
- Standardize resolution parameter and property.
- Deprecate third resolution argument on write (use resolutionunit).
- Deprecate tuple type compression argument on write (use compressionargs).
- Deprecate enums in TIFF namespace (use enums from module).
- Improve default number of threads to write compressed segments (#139).
- Parse metaseries time values as datetime objects (#143).
- Increase internal read and write buffers to 256 MB.
- Convert some warnings to debug messages.
- Declare all classes final.
- Add script to generate documentation via Sphinx.
- Convert docstrings to Google style with Sphinx directives.

2022.5.4

- Allow to write NewSubfileType=0 (#132).
- Support writing iterators of strip or tile bytes.
- Convert iterables (not iterators) to NumPy arrays when writing (breaking).
- Explicitly specify optional keyword parameters for imread and imwrite.
- Return number of written bytes from FileHandle write functions.

2022.4.28

- Add option to specify fsspec version 1 URL template name (#131).
- Ignore invalid dates in UIC tags (#129).
- Fix zlib_encode and lzma_encode to work with non-contiguous arrays (#128).
- Fix delta_encode to preserve byteorder of ndarrays.
- Move Imagecodecs fallback functions to private module and add tests.

2022.4.26

- Fix AttributeError in TiffFile.shaped_metadata (#127).
- Fix TiffTag.overwrite with pre-packed binary value.
- Write sparse TIFF if tile iterator contains None.
- Raise ValueError when writing photometric mode with too few samples.
- Improve test coverage.

2022.4.22

- Add type hints for Python 3.10 (WIP).
- Fix Mypy errors (breaking).
- Mark many parameters positional-only or keyword-only (breaking).
- Remove deprecated pages parameter from imread (breaking).
- Remove deprecated compress and ijmetadata write parameters (breaking).
- Remove deprecated fastij and movie parameters from TiffFile (breaking).
- Remove deprecated multifile parameters from TiffFile (breaking).
- Remove deprecated tif parameter from TiffTag.overwrite (breaking).
- Remove deprecated file parameter from FileSequence.asarray (breaking).
- Remove option to pass imread class to FileSequence (breaking).
- Remove optional parameters from __str__ functions (breaking).
- Rename TiffPageSeries.offset to dataoffset (breaking).
- Change TiffPage.pages to None if no SubIFDs are present (breaking).
- Change TiffPage.index to int (breaking).
- Change TiffPage.is_contiguous, is_imagej, and is_shaped to bool (breaking).
- Add TiffPage imagej_description and shaped_description properties.
- Add TiffFormat abstract base class.
- Deprecate lazyattr and use functools.cached_property instead (breaking).
- Julian_datetime raises ValueError for dates before year 1 (breaking).
- Regressed import time due to typing.

2022.4.8

- Add _ARRAY_DIMENSIONS attributes to ZarrTiffStore.
- Allow C instead of S axis when writing OME-TIFF.
- Fix writing OME-TIFF with separate samples.
- Fix reading unsqueezed pyramidal OME-TIFF series.

2022.3.25

- Fix another ValueError using ZarrStore with zarr >= 2.11.0 (tiffslide #25).
- Add parser for Hamamatsu streak metadata.
- Improve hexdump.

2022.3.16

- Use multi-threading to compress strips and tiles.
- Raise TiffFileError when reading corrupted strips and tiles (#122).
- Fix ScanImage single channel count (#121).
- Add parser for AstroTIFF FITS metadata.

2022.2.9

- Fix ValueError using multiscale ZarrStore with zarr >= 2.11.0.
- Raise KeyError if ZarrStore does not contain key.
- Limit number of warnings for missing files in multifile series.
- Allow to save colormap to 32-bit ImageJ files (#115).

2022.2.2

- Fix TypeError when second ImageDescription tag contains non-ASCII (#112).
- Fix parsing IJMetadata with many IJMetadataByteCounts (#111).
- Detect MicroManager NDTiffv2 header (not tested).
- Remove cache from ZarrFileSequenceStore (use zarr.LRUStoreCache).
- Raise limit on maximum number of pages.
- Use J2K format when encoding JPEG2000 segments.
- Formally deprecate imsave and TiffWriter.save.
- Drop support for Python 3.7 and NumPy < 1.19 (NEP29).

2021.11.2

- Lazy-load non-essential tag values (breaking).
- Warn when reading from closed file.
- Support ImageJ prop metadata type (#103).
- Support writing indexed ImageJ format.
- Fix multi-threaded access of multi-page Zarr stores with chunkmode 2.
- Raise error if truncate is used with compression, packints, or tile.
- Read STK metadata without UIC2tag.
- Improve log and warning messages (WIP).
- Improve string representation of large tag values.
2021.10.12

- Revert renaming of file parameter in FileSequence.asarray (breaking).
- Deprecate file parameter in FileSequence.asarray.

2021.10.10

- Disallow letters as indices in FileSequence; use categories (breaking).
- Do not warn of missing files in FileSequence; use files_missing property.
- Support predictors in ZarrTiffStore.write_fsspec.
- Add option to specify Zarr group name in write_fsspec.
- Add option to specify categories for FileSequence patterns (#76).
- Add option to specify chunk shape and dtype for ZarrFileSequenceStore.
- Add option to tile ZarrFileSequenceStore and FileSequence.asarray.
- Add option to pass additional zattrs to Zarr stores.
- Detect Roche BIF files.

2021.8.30

- Fix horizontal differencing with non-native byte order.
- Fix multi-threaded access of memory-mappable, multi-page Zarr stores (#67).

2021.8.8

- Fix tag offset and valueoffset for NDPI > 4 GB (#96).

2021.7.30

- Deprecate first parameter to TiffTag.overwrite (no longer required).
- TiffTag init API change (breaking).
- Detect Ventana BIF series and warn that tiles are not stitched.
- Enable reading PreviewImage from RAW formats (#93, #94).
- Work around numpy.ndarray.tofile is very slow for non-contiguous arrays.
- Fix issues with PackBits compression (requires imagecodecs 2021.7.30).

2021.7.2

- Decode complex integer images found in SAR GeoTIFF.
- Support reading NDPI with JPEG-XR compression.
- Deprecate TiffWriter RGB auto-detection, except for RGB24/48 and RGBA32/64.

2021.6.14

- Set stacklevel for deprecation warnings (#89).
- Fix svs_description_metadata for SVS with double header (#88, breaking).
- Fix reading JPEG compressed CMYK images.
- Support ALT_JPEG and JPEG_2000_LOSSY compression found in Bio-Formats.
- Log warning if TiffWriter auto-detects RGB mode (specify photometric).

2021.6.6

- Fix TIFF.COMPESSOR typo (#85).
- Round resolution numbers that do not fit in 64-bit rationals (#81).
- Add support for JPEG XL compression.
- Add Numcodecs compatible TIFF codec.
- Rename ZarrFileStore to ZarrFileSequenceStore (breaking).
- Add method to export fsspec ReferenceFileSystem from ZarrFileStore.
- Fix fsspec ReferenceFileSystem v1 for multifile series.
- Fix creating OME-TIFF with micron character in OME-XML.

2021.4.8

- Fix reading OJPEG with wrong photometric or samplesperpixel tags (#75).
- Fix fsspec ReferenceFileSystem v1 and JPEG compression.
- Use TiffTagRegistry for NDPI_TAGS, EXIF_TAGS, GPS_TAGS, IOP_TAGS constants.
- Make TIFF.GEO_KEYS an Enum (breaking).

2021.3.31

- Use JPEG restart markers as tile offsets in NDPI.
- Support version 1 and more codecs in fsspec ReferenceFileSystem (untested).

2021.3.17

- Fix regression reading multi-file OME-TIFF with missing files (#72).
- Fix fsspec ReferenceFileSystem with non-native byte order (#56).

2021.3.16

- TIFF is no longer a defended trademark.
- Add method to export fsspec ReferenceFileSystem from ZarrTiffStore (#56).

2021.3.5

- Preliminary support for EER format (#68).
- Do not warn about unknown compression (#68).

2021.3.4

- Fix reading multi-file, multi-series OME-TIFF (#67).
- Detect ScanImage 2021 files (#46).
- Shape new version ScanImage series according to metadata (breaking).
- Remove Description key from TiffFile.scanimage_metadata dict (breaking).
- Also return ScanImage version from read_scanimage_metadata (breaking).
- Fix docstrings.

2021.2.26

- Squeeze axes of LSM series by default (breaking).
- Add option to preserve single dimensions when reading from series (WIP).
- Do not allow appending to OME-TIFF files.
- Fix reading STK files without name attribute in metadata.
- Make TIFF constants multi-thread safe and pickleable (#64).
- Add detection of NDTiffStorage MajorVersion to read_micromanager_metadata.
- Support ScanImage v4 files in read_scanimage_metadata.

2021.2.1

- Fix multi-threaded access of ZarrTiffStores using same TiffFile instance.
- Use fallback zlib and lzma codecs with imagecodecs lite builds.
- Open Olympus and Panasonic RAW files for parsing, albeit not supported.
- Support X2 and X4 differencing found in DNG.
- Support reading JPEG_LOSSY compression found in DNG.

2021.1.14

- Try ImageJ series if OME series fails (#54).
- Add option to use pages as chunks in ZarrFileStore (experimental).
- Fix reading from file objects with no readinto function.

2021.1.11

- Fix test errors on PyPy.
- Fix decoding bitorder with imagecodecs >= 2021.1.11.

2021.1.8

- Decode float24 using imagecodecs >= 2021.1.8.
- Consolidate reading of segments if possible.

2020.12.8

- Fix corrupted ImageDescription in multi shaped series if buffer too small.
- Fix libtiff warning that ImageDescription contains null byte in value.
- Fix reading invalid files using JPEG compression with palette colorspace.

2020.12.4

- Fix reading some JPEG compressed CFA images.
- Make index of SubIFDs a tuple.
- Pass through FileSequence.imread arguments in imread.
- Do not apply regex flags to FileSequence axes patterns (breaking).

2020.11.26

- Add option to pass axes metadata to ImageJ writer.
- Pad incomplete tiles passed to TiffWriter.write (#38).
- Split TiffTag constructor (breaking).
- Change TiffTag.dtype to TIFF.DATATYPES (breaking).
- Add TiffTag.overwrite method.
- Add script to change ImageDescription in files.
- Add TiffWriter.overwrite_description method (WIP).

2020.11.18

- Support writing SEPARATED color space (#37).
- Use imagecodecs.deflate codec if available.
- Fix SCN and NDPI series with Z dimensions.
- Add TiffReader alias for TiffFile.
- TiffPage.is_volumetric returns True if ImageDepth > 1.
- Zarr store getitem returns NumPy arrays instead of bytes.

2020.10.1

- Formally deprecate unused TiffFile parameters (scikit-image #4996).

2020.9.30

- Allow to pass additional arguments to compression codecs.
- Deprecate TiffWriter.save method (use TiffWriter.write).
- Deprecate TiffWriter.save compress parameter (use compression).
- Remove multifile parameter from TiffFile (breaking).
- Pass all is_flag arguments from imread to TiffFile.
- Do not byte-swap JPEG2000, WEBP, PNG, JPEGXR segments in TiffPage.decode.

2020.9.29

- Fix reading files produced by ScanImage > 2015 (#29).

2020.9.28

- Derive ZarrStore from MutableMapping.
- Support zero shape ZarrTiffStore.
- Fix ZarrFileStore with non-TIFF files.
- Fix ZarrFileStore with missing files.
- Cache one chunk in ZarrFileStore.
- Keep track of already opened files in FileCache.
- Change parse_filenames function to return zero-based indices.
- Remove reopen parameter from asarray (breaking).
- Rename FileSequence.fromfile to imread (breaking).

2020.9.22

- Add experimental Zarr storage interface (WIP).
- Remove unused first dimension from TiffPage.shaped (breaking).
- Move reading of STK planes to series interface (breaking).
- Always use virtual frames for ScanImage files.
- Use DimensionOrder to determine axes order in OmeXml.
- Enable writing striped volumetric images.
- Keep complete dataoffsets and databytecounts for TiffFrames.
- Return full size tiles from TiffPage.segments.
- Rename TiffPage.is_sgi property to is_volumetric (breaking).
- Rename TiffPageSeries.is_pyramid to is_pyramidal (breaking).
- Fix TypeError when passing jpegtables to non-JPEG decode method (#25).

2020.9.3

- Do not write contiguous series by default (breaking).
- Allow to write to SubIFDs (WIP).
- Fix writing F-contiguous NumPy arrays (#24).

2020.8.25

- Do not convert EPICS timeStamp to datetime object.
- Read incompletely written Micro-Manager image file stack header (#23).
- Remove tag 51123 values from TiffFile.micromanager_metadata (breaking).

2020.8.13

- Use tifffile metadata over OME and ImageJ for TiffFile.series (breaking).
- Fix writing iterable of pages with compression (#20).
- Expand error checking of TiffWriter data, dtype, shape, and tile arguments.

2020.7.24

- Parse nested OmeXml metadata argument (WIP).
- Do not lazy load TiffFrame JPEGTables.
- Fix conditionally skipping some tests.

2020.7.22

- Do not auto-enable OME-TIFF if description is passed to TiffWriter.save.
- Raise error writing empty bilevel or tiled images.
- Allow to write tiled bilevel images.
- Allow to write multi-page TIFF from iterable of single page images (WIP).
- Add function to validate OME-XML.
- Correct Philips slide width and length.

2020.7.17

- Initial support for writing OME-TIFF (WIP).
- Return samples as separate dimension in OME series (breaking).
- Fix modulo dimensions for multiple OME series.
- Fix some test errors on big endian systems (#18).
- Fix BytesWarning.
- Allow to pass TIFF.PREDICTOR values to TiffWriter.save.

2020.7.4

- Deprecate support for Python 3.6 (NEP 29).
- Move pyramidal subresolution series to TiffPageSeries.levels (breaking).
- Add parser for SVS, SCN, NDPI, and QPI pyramidal series.
- Read single-file OME-TIFF pyramids.
- Read NDPI files > 4 GB (#15).
- Include SubIFDs in generic series.
- Preliminary support for writing packed integer arrays (#11, WIP).
- Read more LSM info subrecords.
- Fix missing ReferenceBlackWhite tag for YCbCr photometrics.
- Fix reading lossless JPEG compressed DNG files.

2020.6.3

- Support os.PathLike file names (#9).

2020.5.30

- Re-add pure Python PackBits decoder.

2020.5.25

- Make imagecodecs an optional dependency again.
- Disable multi-threaded decoding of small LZW compressed segments.
- Fix caching of TiffPage.decode method.
- Fix xml.etree.cElementTree ImportError on Python 3.9.
- Fix tostring DeprecationWarning.

2020.5.11

- Fix reading ImageJ grayscale mode RGB images (#6).
- Remove napari reader plugin.

2020.5.7

- Add napari reader plugin (tentative).
- Fix writing single tiles larger than image data (#3).
- Always store ExtraSamples values in tuple (breaking).

2020.5.5

- Allow to write tiled TIFF from iterable of tiles (WIP).
- Add method to iterate over decoded segments of TiffPage (WIP).
- Pass chunks of segments to ThreadPoolExecutor.map to reduce memory usage.
- Fix reading invalid files with too many strips.
- Fix writing over-aligned image data.
- Detect OME-XML without declaration (#2).
- Support LERC compression (WIP).
- Delay load imagecodecs functions.
- Remove maxsize parameter from asarray (breaking).
- Deprecate ijmetadata parameter from TiffWriter.save (use metadata).

2020.2.16

- Add method to decode individual strips or tiles.
- Read strips and tiles in order of their offsets.
- Enable multi-threading when decompressing multiple strips.
- Replace TiffPage.tags dictionary with TiffTags (breaking).
- Replace TIFF.TAGS dictionary with TiffTagRegistry.
- Remove TIFF.TAG_NAMES (breaking).
- Improve handling of TiffSequence parameters in imread.
- Match last uncommon parts of file paths to FileSequence pattern (breaking).
- Allow letters in FileSequence pattern for indexing well plate rows.
- Allow to reorder axes in FileSequence.
- Allow to write > 4 GB arrays to plain TIFF when using compression.
- Allow to write zero size NumPy arrays to nonconformant TIFF (tentative).
- Fix xml2dict.
- Require imagecodecs >= 2020.1.31.
- Remove support for imagecodecs-lite (breaking).
- Remove verify parameter to asarray method (breaking).
- Remove deprecated lzw_decode functions (breaking).
- Remove support for Python 2.7 and 3.5 (breaking).

2019.7.26

- Fix infinite loop reading more than two tags of same code in IFD.
- Delay import of logging module.

2019.7.20

- Fix OME-XML detection for files created by Imaris.
- Remove or replace assert statements.

2019.7.2

- Do not write SampleFormat tag for unsigned data types.
- Write ByteCount tag values as SHORT or LONG if possible.
- Allow to specify axes in FileSequence pattern via group names.
- Add option to concurrently read FileSequence using threads.
- Derive TiffSequence from FileSequence.
- Use str(datetime.timedelta) to format Timer duration.
- Use perf_counter for Timer if possible.

2019.6.18

- Fix reading planar RGB ImageJ files created by Bio-Formats.
- Fix reading single-file, multi-image OME-TIFF without UUID.
- Presume LSM stores uncompressed images contiguously per page.
- Reformat some complex expressions.

2019.5.30

- Ignore invalid frames in OME-TIFF.
- Set default subsampling to (2, 2) for RGB JPEG compression.
- Fix reading and writing planar RGB JPEG compression.
- Replace buffered_read with FileHandle.read_segments.
- Include page or frame numbers in exceptions and warnings.
- Add Timer class.

2019.5.22

- Add optional chroma subsampling for JPEG compression.
- Enable writing PNG, JPEG, JPEGXR, and JPEG2K compression (WIP).
- Fix writing tiled images with WebP compression.
- Improve handling GeoTIFF sparse files.

2019.3.18

- Fix regression decoding JPEG with RGB photometrics.
- Fix reading OME-TIFF files with corrupted but unused pages.
- Allow to load TiffFrame without specifying keyframe.
- Calculate virtual TiffFrames for non-BigTIFF ScanImage files > 2GB.
- Rename property is_chroma_subsampled to is_subsampled (breaking).
- Make more attributes and methods private (WIP).

2019.3.8

- Fix MemoryError when RowsPerStrip > ImageLength.
- Fix SyntaxWarning on Python 3.8.
- Fail to decode JPEG to planar RGB (tentative).
- Separate public from private test files (WIP).
- Allow testing without data files or imagecodecs.

2019.2.22

- Use imagecodecs-lite as fallback for imagecodecs.
- Simplify reading NumPy arrays from file.
- Use TiffFrames when reading arrays from page sequences.
- Support slices and iterators in TiffPageSeries sequence interface.
- Auto-detect uniform series.
- Use page hash to determine generic series.
- Turn off TiffPages cache (tentative).
- Pass through more parameters in imread.
- Discontinue movie parameter in imread and TiffFile (breaking).
- Discontinue bigsize parameter in imwrite (breaking).
- Raise TiffFileError in case of issues with TIFF structure.
- Return TiffFile.ome_metadata as XML (breaking).
- Ignore OME series when last dimensions are not stored in TIFF pages.

2019.2.10

- Assemble IFDs in memory to speed-up writing on some slow media.
- Handle discontinued arguments fastij, multifile_close, and pages.

2019.1.30

- Use black background in imshow.
- Do not write datetime tag by default (breaking).
- Fix OME-TIFF with SamplesPerPixel > 1.
- Allow 64-bit IFD offsets for NDPI (files > 4GB still not supported).

2019.1.4

- Fix decoding deflate without imagecodecs.

2019.1.1

- Update copyright year.
- Require imagecodecs >= 2018.12.16.
- Do not use JPEG tables from keyframe.
- Enable decoding large JPEG in NDPI.
- Decode some old-style JPEG.
- Reorder OME channel axis to match PlanarConfiguration storage.
- Return tiled images as contiguous arrays.
- Add decode_lzw proxy function for compatibility with old czifile module.
- Use dedicated logger.

2018.11.28

- Make SubIFDs accessible as TiffPage.pages.
- Make parsing of TiffSequence axes pattern optional (breaking).
- Limit parsing of TiffSequence axes pattern to file names, not path names.
- Do not interpolate in imshow if image dimensions <= 512, else use bilinear.
- Use logging.warning instead of warnings.warn in many cases.
- Fix NumPy FutureWarning for out == 'memmap'.
- Adjust ZSTD and WebP compression to libtiff-4.0.10 (WIP).
- Decode old-style LZW with imagecodecs >= 2018.11.8.
- Remove TiffFile.qptiff_metadata (QPI metadata are per page).
- Do not use keyword arguments before variable positional arguments.
- Make either all or none return statements in function return expression.
- Use pytest parametrize to generate tests.
- Replace test classes with functions.

2018.11.6

- Rename imsave function to imwrite.
- Re-add Python implementations of packints, delta, and bitorder codecs.
- Fix TiffFrame.compression AttributeError.

2018.10.18

- Rename tiffile package to tifffile.

2018.10.10

- Read ZIF, the Zoomable Image Format (WIP).
- Decode YCbCr JPEG as RGB (tentative).
- Improve restoration of incomplete tiles.
- Allow to write grayscale with extrasamples without specifying planarconfig.
- Enable decoding of PNG and JXR via imagecodecs.
- Deprecate 32-bit platforms (too many memory errors during tests).

2018.9.27

- Read Olympus SIS (WIP).
- Allow to write non-BigTIFF files up to ~4 GB (fix).
- Fix parsing date and time fields in SEM metadata.
- Detect some circular IFD references.
- Enable WebP codecs via imagecodecs.
- Add option to read TiffSequence from ZIP containers.
- Remove TiffFile.isnative.
- Move TIFF struct format constants out of TiffFile namespace.

2018.8.31

- Fix wrong TiffTag.valueoffset.
- Towards reading Hamamatsu NDPI (WIP).
- Enable PackBits compression of byte and bool arrays.
- Fix parsing NULL terminated CZ_SEM strings.

2018.8.24

- Move tifffile.py and related modules into tiffile package.
- Move usage examples to module docstring.
- Enable multi-threading for compressed tiles and pages by default.
- Add option to concurrently decode image tiles using threads.
- Do not skip empty tiles (fix).
- Read JPEG and J2K compressed strips and tiles.
- Allow floating-point predictor on write.
- Add option to specify subfiletype on write.
- Depend on imagecodecs package instead of _tifffile, lzma, etc modules.
- Remove reverse_bitorder, unpack_ints, and decode functions.
- Use pytest instead of unittest.

2018.6.20

- Save RGBA with unassociated extrasample by default (breaking).
- Add option to specify ExtraSamples values.

2018.6.17 (included with 0.15.1)

- Towards reading JPEG and other compressions via imagecodecs package (WIP).
- Read SampleFormat VOID as UINT.
- Add function to validate TIFF using `jhove -m TIFF-hul`.
- Save bool arrays as bilevel TIFF.
- Accept pathlib.Path as filenames.
- Move software argument from TiffWriter __init__ to save.
- Raise DOS limit to 16 TB.
- Lazy load LZMA and ZSTD compressors and decompressors.
- Add option to save IJMetadata tags.
- Return correct number of pages for truncated series (fix).
- Move EXIF tags to TIFF.TAG as per TIFF/EP standard.

2018.2.18

- Always save RowsPerStrip and Resolution tags as required by TIFF standard.
- Do not use badly typed ImageDescription.
- Coerce bad ASCII string tags to bytes.
- Tuning of __str__ functions.
- Fix reading undefined tag values.
- Read and write ZSTD compressed data.
- Use hexdump to print bytes.
- Determine TIFF byte order from data dtype in imsave.
- Add option to specify RowsPerStrip for compressed strips.
- Allow memory-map of arrays with non-native byte order.
- Attempt to handle ScanImage <= 5.1 files.
- Restore TiffPageSeries.pages sequence interface.
- Use numpy.frombuffer instead of fromstring to read from binary data.
- Parse GeoTIFF metadata.
- Add option to apply horizontal differencing before compression.
- Towards reading PerkinElmer QPI (QPTIFF, no test files).
- Do not index out of bounds data in tifffile.c unpackbits and decodelzw.

2017.9.29

- Many backward incompatible changes improving speed and resource usage:
- Add detail argument to __str__ function. Remove info functions.
- Fix potential issue correcting offsets of large LSM files with positions.
- Remove TiffFile sequence interface; use TiffFile.pages instead.
- Do not make tag values available as TiffPage attributes.
- Use str (not bytes) type for tag and metadata strings (WIP).
- Use documented standard tag and value names (WIP).
- Use enums for some documented TIFF tag values.
- Remove memmap and tmpfile options; use out='memmap' instead.
- Add option to specify output in asarray functions.
- Add option to concurrently decode pages using threads.
- Add TiffPage.asrgb function (WIP).
- Do not apply colormap in asarray.
- Remove colormapped, rgbonly, and scale_mdgel options from asarray.
- Consolidate metadata in TiffFile _metadata functions.
- Remove non-tag metadata properties from TiffPage.
- Add function to convert LSM to tiled BIN files.
- Align image data in file.
- Make TiffPage.dtype a numpy.dtype.
- Add ndim and size properties to TiffPage and TiffPageSeries.
- Allow imsave to write non-BigTIFF files up to ~4 GB.
- Only read one page for shaped series if possible.
- Add memmap function to create memory-mapped array stored in TIFF file.
- Add option to save empty arrays to TIFF files.
- Add option to save truncated TIFF files.
- Allow single tile images to be saved contiguously.
- Add optional movie mode for files with uniform pages.
- Lazy load pages.
- Use lightweight TiffFrame for IFDs sharing properties with key TiffPage.
- Move module constants to TIFF namespace (speed up module import).
- Remove fastij option from TiffFile.
- Remove pages parameter from TiffFile.
- Remove TIFFfile alias.
- Deprecate Python 2.
- Require enum34 and futures packages on Python 2.7.
- Remove Record class and return all metadata as dict instead.
- Add functions to parse STK, MetaSeries, ScanImage, SVS, Pilatus metadata.
- Read tags from EXIF and GPS IFDs.
- Use pformat for tag and metadata values.
- Fix reading some UIC tags.
- Do not modify input array in imshow (fix).
- Fix Python implementation of unpack_ints.

2017.5.23

- Write correct number of SampleFormat values (fix).
- Use Adobe deflate code to write ZIP compressed files.
- Add option to pass tag values as packed binary data for writing.
- Defer tag validation to attribute access.
- Use property instead of lazyattr decorator for simple expressions.

2017.3.17

- Write IFDs and tag values on word boundaries.
- Read ScanImage metadata.
- Remove is_rgb and is_indexed attributes from TiffFile.
- Create files used by doctests.

2017.1.12 (included with scikit-image 0.14.x)

- Read Zeiss SEM metadata.
- Read OME-TIFF with invalid references to external files.
- Rewrite C LZW decoder (5x faster).
- Read corrupted LSM files missing EOI code in LZW stream.

2017.1.1

- Add option to append images to existing TIFF files.
- Read files without pages.
- Read S-FEG and Helios NanoLab tags created by FEI software.
- Allow saving Color Filter Array (CFA) images.
- Add info functions returning more information about TiffFile and TiffPage.
- Add option to read specific pages only.
- Remove maxpages argument (breaking).
- Remove test_tifffile function.

2016.10.28

- Improve detection of ImageJ hyperstacks.
- Read TVIPS metadata created by EM-MENU (by Marco Oster).
- Add option to disable using OME-XML metadata.
- Allow non-integer range attributes in modulo tags (by Stuart Berg).

2016.6.21

- Do not always memmap contiguous data in page series.

2016.5.13

- Add option to specify resolution unit.
- Write grayscale images with extra samples when planarconfig is specified.
- Do not write RGB color images with 2 samples.
- Reorder TiffWriter.save keyword arguments (breaking).

2016.4.18

- TiffWriter, imread, and imsave accept open binary file streams.

2016.04.13

- Fix reversed fill order in 2 and 4 bps images.
- Implement reverse_bitorder in C.

2016.03.18

- Fix saving additional ImageJ metadata.

2016.2.22

- Write 8 bytes double tag values using offset if necessary (bug fix).
- Add option to disable writing second image description tag.
- Detect tags with incorrect counts.
- Disable color mapping for LSM.

2015.11.13

- Read LSM 6 mosaics.
- Add option to specify directory of memory-mapped files.
- Add command line options to specify vmin and vmax values for colormapping.

2015.10.06

- New helper function to apply colormaps.
- Renamed is_palette attributes to is_indexed (breaking).
- Color-mapped samples are now contiguous (breaking).
- Do not color-map ImageJ hyperstacks (breaking).
- Towards reading Leica SCN.

2015.9.25

- Read images with reversed bit order (FillOrder is LSB2MSB).

2015.9.21

- Read RGB OME-TIFF.
- Warn about malformed OME-XML.

2015.9.16

- Detect some corrupted ImageJ metadata.
- Better axes labels for shaped files.
- Do not create TiffTag for default values.
- Chroma subsampling is not supported.
- Memory-map data in TiffPageSeries if possible (optional).

2015.8.17

- Write ImageJ hyperstacks (optional).
- Read and write LZMA compressed data.
- Specify datetime when saving (optional).
- Save tiled and color-mapped images (optional).
- Ignore void bytecounts and offsets if possible.
- Ignore bogus image_depth tag created by ISS Vista software.
- Decode floating-point horizontal differencing (not tiled).
- Save image data contiguously if possible.
- Only read first IFD from ImageJ files if possible.
- Read ImageJ raw format (files larger than 4 GB).
- TiffPageSeries class for pages with compatible shape and data type.
- Try to read incomplete tiles.
- Open file dialog if no filename is passed on command line.
- Ignore errors when decoding OME-XML.
- Rename decoder functions (breaking).

2014.8.24

- TiffWriter class for incremental writing images.
- Simplify examples.

2014.8.19

- Add memmap function to FileHandle.
- Add function to determine if image data in TiffPage is memory-mappable.
- Do not close files if multifile_close parameter is False.

2014.8.10

- Return all extrasamples by default (breaking).
- Read data from series of pages into memory-mapped array (optional).
- Squeeze OME dimensions (breaking).
- Workaround missing EOI code in strips.
- Support image and tile depth tags (SGI extension).
- Better handling of STK/UIC tags (breaking).
- Disable color mapping for STK.
- Julian to datetime converter.
- TIFF ASCII type may be NULL separated.
- Unwrap strip offsets for LSM files greater than 4 GB.
- Correct strip byte counts in compressed LSM files.
- Skip missing files in OME series.
- Read embedded TIFF files.

2014.2.05

- Save rational numbers as type 5 (bug fix).

2013.12.20

- Keep other files in OME multi-file series closed.
- FileHandle class to abstract binary file handle.
- Disable color mapping for bad OME-TIFF produced by bio-formats.
- Read bad OME-XML produced by ImageJ when cropping.

2013.11.3

- Allow zlib compress data in imsave function (optional).
- Memory-map contiguous image data (optional).

2013.10.28

- Read MicroManager metadata and little-endian ImageJ tag.
- Save extra tags in imsave function.
- Save tags in ascending order by code (bug fix).

2012.10.18

- Accept file like objects (read from OIB files).

2012.8.21

- Rename TIFFfile to TiffFile and TIFFpage to TiffPage.
- TiffSequence class for reading sequence of TIFF files.
- Read UltraQuant tags.
- Allow float numbers as resolution in imsave function.

2012.8.3

- Read MD GEL tags and NIH Image header.

2012.7.25

- Read ImageJ tags.
- …

BSD 3-Clause License

Copyright (c) 2008-2025, Christoph Gohlke
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
include LICENSE
include README.rst
include CHANGES.rst
include ACKNOWLEDGEMENTS.rst
include tifffile/py.typed
exclude *.cmd
recursive-exclude doc *
recursive-exclude test *
recursive-exclude tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
recursive-exclude * *-
recursive-exclude test/data *
recursive-exclude test/_tmp *
include tests/conftest.py
include tests/test_tifffile.py
include examples/earthbigdata.py
include examples/issue125.py
include docs/conf.py
include docs/make.py
include docs/_static/custom.css

Metadata-Version: 2.4
Name: tifffile
Version: 2025.3.30
Summary: Read and write TIFF files
Home-page: https://www.cgohlke.com
Author: Christoph Gohlke
Author-email: cgohlke@cgohlke.com
License: BSD-3-Clause
Project-URL: Bug Tracker, https://github.com/cgohlke/tifffile/issues
Project-URL: Source Code, https://github.com/cgohlke/tifffile
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.10
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: numpy
Provides-Extra: codecs
Requires-Dist: imagecodecs>=2024.12.30; extra == "codecs"
Provides-Extra: xml
Requires-Dist: defusedxml; extra == "xml"
Requires-Dist: lxml; extra == "xml"
Provides-Extra: zarr
Requires-Dist: zarr<3; extra == "zarr"
Requires-Dist: fsspec; extra == "zarr"
Provides-Extra: plot
Requires-Dist: matplotlib; extra == "plot"
Provides-Extra: all
Requires-Dist: imagecodecs>=2024.12.30; extra == "all"
Requires-Dist: matplotlib; extra == "all"
Requires-Dist: defusedxml; extra == "all"
Requires-Dist: lxml; extra == "all"
Requires-Dist: zarr<3; extra == "all"
Requires-Dist: fsspec; extra == "all"
Provides-Extra: test
Requires-Dist: pytest; extra == "test"
Requires-Dist: imagecodecs; extra == "test"
Requires-Dist: czifile; extra == "test"
Requires-Dist: cmapfile; extra == "test"
Requires-Dist: oiffile; extra == "test"
Requires-Dist: lfdfiles; extra == "test"
Requires-Dist: psdtags; extra == "test"
Requires-Dist: roifile; extra == "test"
Requires-Dist: lxml; extra == "test"
Requires-Dist: zarr<3; extra == "test"
Requires-Dist: dask; extra == "test"
Requires-Dist: xarray; extra == "test"
Requires-Dist: fsspec; extra == "test"
Requires-Dist: defusedxml; extra == "test"
Requires-Dist: ndtiff; extra == "test"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: license-file
Dynamic: platform
Dynamic: project-url
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

Read and write TIFF files
=========================

Tifffile is a Python library to (1) store NumPy arrays in TIFF (Tagged Image
File Format) files, and (2) read image and metadata from TIFF-like files used
in bioimaging.

Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, GeoTIFF,
Adobe DNG, ZIF (Zoomable Image File Format), MetaMorph STK, Zeiss LSM,
ImageJ hyperstack, Micro-Manager MMStack and NDTiff, SGI, NIHImage,
Olympus FluoView and SIS, ScanImage, Molecular Dynamics GEL, Aperio SVS,
Leica SCN, Roche BIF, PerkinElmer QPTIFF (QPI, PKI), Hamamatsu NDPI,
Argos AVS, and Philips DP formatted files.

Image data can be read as NumPy arrays or Zarr 2 arrays/groups from strips,
tiles, pages (IFDs), SubIFDs, higher-order series, and pyramidal levels.

Image data can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack
compatible files in multi-page, volumetric, pyramidal, memory-mappable,
tiled, predicted, or compressed form.

Many compression and predictor schemes are supported via the imagecodecs
library, including LZW, PackBits, Deflate, PIXTIFF, LZMA, LERC, Zstd,
JPEG (8 and 12-bit, lossless), JPEG 2000, JPEG XR, JPEG XL, WebP, PNG, EER,
Jetraw, 24-bit floating-point, and horizontal differencing.

Tifffile can also be used to inspect TIFF structures, read image data from
multi-dimensional file sequences, write fsspec ReferenceFileSystem for
TIFF files and image file sequences, patch TIFF tag values, and parse many
proprietary metadata formats.

:Author: `Christoph Gohlke <https://www.cgohlke.com>`_
:License: BSD 3-Clause
:Version: 2025.3.30
:DOI: `10.5281/zenodo.6795860 <https://doi.org/10.5281/zenodo.6795860>`_
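The Zarr 2 access mentioned above can look like the following minimal sketch;
it assumes the optional zarr 2 package is installed and that a multi-page
'temp.tif' exists (for instance, one written as shown in `Examples`_ below):

>>> import zarr
>>> store = imread('temp.tif', aszarr=True)  # no image data is read yet
>>> z = zarr.open(store, mode='r')  # Zarr 2 array backed by the TIFF pages
>>> first_page = z[0]  # decodes only the chunks of the first page
>>> store.close()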
Quickstart
----------

Install the tifffile package and all dependencies from the
`Python Package Index <https://pypi.org/project/tifffile/>`_::

    python -m pip install -U tifffile[all]

Tifffile is also available in other package repositories such as Anaconda,
Debian, and MSYS2.

The tifffile library is type annotated and documented via docstrings::

    python -c "import tifffile; help(tifffile)"

Tifffile can be used as a console script to inspect and preview TIFF files::

    python -m tifffile --help

See `Examples`_ for using the programming interface.

Source code and support are available on
`GitHub <https://github.com/cgohlke/tifffile>`_.

Support is also provided on the `image.sc <https://forum.image.sc>`_ forum.
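As a first, minimal sketch of that programming interface (the file name is a
placeholder; see `Examples`_ for details), a write/read round trip:

>>> import numpy
>>> from tifffile import imread, imwrite
>>> imwrite('temp.tif', numpy.zeros((64, 64), dtype='uint8'))
>>> image = imread('temp.tif')
>>> image.shape
(64, 64)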
Requirements
------------

This revision was tested with the following requirements and dependencies
(other versions may work):

- CPython 3.10.11, 3.11.9, 3.12.9, 3.13.2 64-bit
- NumPy 2.2.4
- Imagecodecs 2025.3.30 (required for encoding or decoding LZW, JPEG, etc.
  compressed segments)
- Matplotlib 3.10.1 (required for plotting)
- Lxml 5.3.1 (required only for validating and printing XML)
- Zarr 2.18.5 (required only for opening Zarr stores;
  Zarr 3 is not compatible)
- Fsspec 2025.2.0 (required only for opening ReferenceFileSystem files)

Revisions
---------

Refer to the CHANGES file, included above, for the list of revisions.

Notes
-----

TIFF, the Tagged Image File Format, was created by the Aldus Corporation and
Adobe Systems Incorporated.

Tifffile supports a subset of the TIFF6 specification, mainly 8, 16, 32, and
64-bit integer, 16, 32, and 64-bit float, grayscale and multi-sample images.
Specifically, CCITT and OJPEG compression, chroma subsampling without JPEG
compression, color space transformations, samples with differing types, or
IPTC, ICC, and XMP metadata are not implemented.

Besides classic TIFF, tifffile supports several TIFF-like formats that do not
strictly adhere to the TIFF6 specification. Some formats allow file and data
sizes to exceed the 4 GB limit of the classic TIFF:

- **BigTIFF** is identified by version number 43 and uses different file
  header, IFD, and tag structures with 64-bit offsets. The format also adds
  64-bit data types. Tifffile can read and write BigTIFF files.

- **ImageJ hyperstacks** store all image data, which may exceed 4 GB,
  contiguously after the first IFD. Files > 4 GB contain one IFD only.
  The size and shape of the up to 6-dimensional image data can be determined
  from the ImageDescription tag of the first IFD, which is Latin-1 encoded.
  Tifffile can read and write ImageJ hyperstacks.

- **OME-TIFF** files store up to 8-dimensional image data in one or multiple
  TIFF or BigTIFF files. The UTF-8 encoded OME-XML metadata found in the
  ImageDescription tag of the first IFD defines the position of TIFF IFDs in
  the high-dimensional image data. Tifffile can read OME-TIFF files (except
  multi-file pyramidal) and write NumPy arrays to single-file OME-TIFF.

- **Micro-Manager NDTiff** stores multi-dimensional image data in one or more
  classic TIFF files. Metadata contained in a separate NDTiff.index binary
  file defines the position of the TIFF IFDs in the image array. Each TIFF
  file also contains metadata in a non-TIFF binary structure at offset 8.
  Downsampled image data of pyramidal datasets are stored in separate folders.
  Tifffile can read NDTiff files. Version 0 and 1 series, tiling, stitching,
  and multi-resolution pyramids are not supported.

- **Micro-Manager MMStack** stores 6-dimensional image data in one or more
  classic TIFF files. Metadata contained in non-TIFF binary structures and
  JSON strings define the image stack dimensions and the position of the
  image frame data in the file and the image stack. The TIFF structures and
  metadata are often corrupted or wrong. Tifffile can read MMStack files.

- **Carl Zeiss LSM** files store all IFDs below 4 GB and wrap around 32-bit
  StripOffsets pointing to image data above 4 GB. The StripOffsets of each
  series and position require separate unwrapping. The StripByteCounts tag
  contains the number of bytes for the uncompressed data. Tifffile can read
  LSM files of any size.

- **MetaMorph Stack, STK** files contain additional image planes stored
  contiguously after the image data of the first page. The total number of
  planes is equal to the count of the UIC2tag. Tifffile can read STK files.

- **ZIF**, the Zoomable Image File format, is a subspecification of BigTIFF
  with SGI's ImageDepth extension and additional compression schemes.
  Only little-endian, tiled, interleaved, 8-bit per sample images with JPEG,
  PNG, JPEG XR, and JPEG 2000 compression are allowed. Tifffile can read and
  write ZIF files.

- **Hamamatsu NDPI** files use some 64-bit offsets in the file header, IFD,
  and tag structures. Single, LONG typed tag values can exceed 32-bit.
  The high bytes of 64-bit tag values and offsets are stored after IFD
  structures. Tifffile can read NDPI files > 4 GB.
  JPEG compressed segments with dimensions >65530 or missing restart markers
  cannot be decoded with common JPEG libraries. Tifffile works around this
  limitation by separately decoding the MCUs between restart markers, which
  performs poorly. BitsPerSample, SamplesPerPixel, and
  PhotometricInterpretation tags may contain wrong values, which can be
  corrected using the value of tag 65441.

- **Philips TIFF** slides store padded ImageWidth and ImageLength tag values
  for tiled pages. The values can be corrected using the DICOM_PIXEL_SPACING
  attributes of the XML formatted description of the first page. Tile offsets
  and byte counts may be 0. Tifffile can read Philips slides.

- **Ventana/Roche BIF** slides store tiles and metadata in a BigTIFF
  container. Tiles may overlap and require stitching based on the
  TileJointInfo elements in the XMP tag. Volumetric scans are stored using
  the ImageDepth extension. Tifffile can read BIF and decode individual tiles
  but does not perform stitching.

- **ScanImage** optionally allows corrupted non-BigTIFF files > 2 GB.
  The values of StripOffsets and StripByteCounts can be recovered using the
  constant differences of the offsets of IFD and tag values throughout the
  file. Tifffile can read such files if the image data are stored contiguously
  in each page.

- **GeoTIFF sparse** files allow strip or tile offsets and byte counts to be
  0. Such segments are implicitly set to 0 or the NODATA value on reading.
  Tifffile can read GeoTIFF sparse files.

- **Tifffile shaped** files store the array shape and user-provided metadata
  of multi-dimensional image series in JSON format in the ImageDescription
  tag of the first page of the series. The format allows multiple series,
  SubIFDs, sparse segments with zero offset and byte count, and truncated
  series, where only the first page of a series is present, and the image
  data are stored contiguously. No other software besides Tifffile supports
  the truncated format.
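Which of these TIFF-like flavors a particular file uses can be queried via
TiffFile flag properties. A minimal sketch, assuming a 'temp.tif' written by
tifffile exists (see `Examples`_ below); the variable names are illustrative
only:

>>> with TiffFile('temp.tif') as tif:
...     is_bigtiff = tif.is_bigtiff  # BigTIFF header (version 43)?
...     is_imagej = tif.is_imagej  # ImageJ hyperstack description?
...     is_ome = tif.is_ome  # OME-XML in the first ImageDescription tag?
...     is_shaped = tif.is_shaped  # tifffile shaped JSON metadata?
...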
https://downloads.openmicroscopy.org/images/Vectra-QPTIFF/perkinelmer/PKI_Image%20Format.docx - NDTiffStorage. https://github.com/micro-manager/NDTiffStorage - Argos AVS File Format. https://github.com/user-attachments/files/15580286/ARGOS.AVS.File.Format.pdf Examples -------- Write a NumPy array to a single-page RGB TIFF file: >>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb') Read the image from the TIFF file as NumPy array: >>> image = imread('temp.tif') >>> image.shape (256, 256, 3) Use the `photometric` and `planarconfig` arguments to write a 3x3x3 NumPy array to an interleaved RGB, a planar RGB, or a 3-page grayscale TIFF: >>> data = numpy.random.randint(0, 255, (3, 3, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb') >>> imwrite('temp.tif', data, photometric='rgb', planarconfig='separate') >>> imwrite('temp.tif', data, photometric='minisblack') Use the `extrasamples` argument to specify how extra components are interpreted, for example, for an RGBA image with unassociated alpha channel: >>> data = numpy.random.randint(0, 255, (256, 256, 4), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb', extrasamples=['unassalpha']) Write a 3-dimensional NumPy array to a multi-page, 16-bit grayscale TIFF file: >>> data = numpy.random.randint(0, 2**12, (64, 301, 219), 'uint16') >>> imwrite('temp.tif', data, photometric='minisblack') Read the whole image stack from the multi-page TIFF file as NumPy array: >>> image_stack = imread('temp.tif') >>> image_stack.shape (64, 301, 219) >>> image_stack.dtype dtype('uint16') Read the image from the first page in the TIFF file as NumPy array: >>> image = imread('temp.tif', key=0) >>> image.shape (301, 219) Read images from a selected range of pages: >>> images = imread('temp.tif', key=range(4, 40, 2)) >>> images.shape (18, 301, 219) Iterate over all pages in the TIFF file and successively read images: >>> with TiffFile('temp.tif') as tif: ... for page in tif.pages: ... image = page.asarray() ... Get information about the image stack in the TIFF file without reading any image data: >>> tif = TiffFile('temp.tif') >>> len(tif.pages) # number of pages in the file 64 >>> page = tif.pages[0] # get shape and dtype of image in first page >>> page.shape (301, 219) >>> page.dtype dtype('uint16') >>> page.axes 'YX' >>> series = tif.series[0] # get shape and dtype of first image series >>> series.shape (64, 301, 219) >>> series.dtype dtype('uint16') >>> series.axes 'QYX' >>> tif.close() Inspect the "XResolution" tag from the first page in the TIFF file: >>> with TiffFile('temp.tif') as tif: ... tag = tif.pages[0].tags['XResolution'] ... >>> tag.value (1, 1) >>> tag.name 'XResolution' >>> tag.code 282 >>> tag.count 1 >>> tag.dtype Iterate over all tags in the TIFF file: >>> with TiffFile('temp.tif') as tif: ... for page in tif.pages: ... for tag in page.tags: ... tag_name, tag_value = tag.name, tag.value ... Overwrite the value of an existing tag, for example, XResolution: >>> with TiffFile('temp.tif', mode='r+') as tif: ... _ = tif.pages[0].tags['XResolution'].overwrite((96000, 1000)) ... Write a 5-dimensional floating-point array using BigTIFF format, separate color components, tiling, Zlib compression level 8, horizontal differencing predictor, and additional metadata: >>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32') >>> imwrite( ... 'temp.tif', ... data, ... bigtiff=True, ... photometric='rgb', ... planarconfig='separate', ... tile=(32, 32), ... compression='zlib', ... 
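# zlib level 8: smaller files at the cost of encoding speed
... 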
compressionargs={'level': 8}, ... predictor=True, ... metadata={'axes': 'TZCYX'}, ... ) Write a 10 fps time series of volumes with xyz voxel size 2.6755x2.6755x3.9474 micron^3 to an ImageJ hyperstack formatted TIFF file: >>> volume = numpy.random.randn(6, 57, 256, 256).astype('float32') >>> image_labels = [f'{i}' for i in range(volume.shape[0] * volume.shape[1])] >>> imwrite( ... 'temp.tif', ... volume, ... imagej=True, ... resolution=(1.0 / 2.6755, 1.0 / 2.6755), ... metadata={ ... 'spacing': 3.947368, ... 'unit': 'um', ... 'finterval': 1 / 10, ... 'fps': 10.0, ... 'axes': 'TZYX', ... 'Labels': image_labels, ... }, ... ) Read the volume and metadata from the ImageJ hyperstack file: >>> with TiffFile('temp.tif') as tif: ... volume = tif.asarray() ... axes = tif.series[0].axes ... imagej_metadata = tif.imagej_metadata ... >>> volume.shape (6, 57, 256, 256) >>> axes 'TZYX' >>> imagej_metadata['slices'] 57 >>> imagej_metadata['frames'] 6 Memory-map the contiguous image data in the ImageJ hyperstack file: >>> memmap_volume = memmap('temp.tif') >>> memmap_volume.shape (6, 57, 256, 256) >>> del memmap_volume Create a TIFF file containing an empty image and write to the memory-mapped NumPy array (note: this does not work with compression or tiling): >>> memmap_image = memmap( ... 'temp.tif', shape=(256, 256, 3), dtype='float32', photometric='rgb' ... ) >>> type(memmap_image) >>> memmap_image[255, 255, 1] = 1.0 >>> memmap_image.flush() >>> del memmap_image Write two NumPy arrays to a multi-series TIFF file (note: other TIFF readers will not recognize the two series; use the OME-TIFF format for better interoperability): >>> series0 = numpy.random.randint(0, 255, (32, 32, 3), 'uint8') >>> series1 = numpy.random.randint(0, 255, (4, 256, 256), 'uint16') >>> with TiffWriter('temp.tif') as tif: ... tif.write(series0, photometric='rgb') ... tif.write(series1, photometric='minisblack') ... Read the second image series from the TIFF file: >>> series1 = imread('temp.tif', series=1) >>> series1.shape (4, 256, 256) Successively write the frames of one contiguous series to a TIFF file: >>> data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8') >>> with TiffWriter('temp.tif') as tif: ... for frame in data: ... tif.write(frame, contiguous=True) ... Append an image series to the existing TIFF file (note: this does not work with ImageJ hyperstack or OME-TIFF files): >>> data = numpy.random.randint(0, 255, (301, 219, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb', append=True) Create a TIFF file from a generator of tiles: >>> data = numpy.random.randint(0, 2**12, (31, 33, 3), 'uint16') >>> def tiles(data, tileshape): ... for y in range(0, data.shape[0], tileshape[0]): ... for x in range(0, data.shape[1], tileshape[1]): ... yield data[y : y + tileshape[0], x : x + tileshape[1]] ... >>> imwrite( ... 'temp.tif', ... tiles(data, (16, 16)), ... tile=(16, 16), ... shape=data.shape, ... dtype=data.dtype, ... photometric='rgb', ... ) Write a multi-dimensional, multi-resolution (pyramidal), multi-series OME-TIFF file with optional metadata. Sub-resolution images are written to SubIFDs. Limit parallel encoding to 2 threads. Write a thumbnail image as a separate image series: >>> data = numpy.random.randint(0, 255, (8, 2, 512, 512, 3), 'uint8') >>> subresolutions = 2 >>> pixelsize = 0.29 # micrometer >>> with TiffWriter('temp.ome.tif', bigtiff=True) as tif: ... metadata = { ... 'axes': 'TCYXS', ... 'SignificantBits': 8, ... 'TimeIncrement': 0.1, ... 'TimeIncrementUnit': 's', ... 
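# PhysicalSizeX/Y and their units are optional OME pixel size attributes
... 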
'PhysicalSizeX': pixelsize, ... 'PhysicalSizeXUnit': 'µm', ... 'PhysicalSizeY': pixelsize, ... 'PhysicalSizeYUnit': 'µm', ... 'Channel': {'Name': ['Channel 1', 'Channel 2']}, ... 'Plane': {'PositionX': [0.0] * 16, 'PositionXUnit': ['µm'] * 16}, ... 'Description': 'A multi-dimensional, multi-resolution image', ... 'MapAnnotation': { # for OMERO ... 'Namespace': 'openmicroscopy.org/PyramidResolution', ... '1': '256 256', ... '2': '128 128', ... }, ... } ... options = dict( ... photometric='rgb', ... tile=(128, 128), ... compression='jpeg', ... resolutionunit='CENTIMETER', ... maxworkers=2, ... ) ... tif.write( ... data, ... subifds=subresolutions, ... resolution=(1e4 / pixelsize, 1e4 / pixelsize), ... metadata=metadata, ... **options, ... ) ... # write pyramid levels to the two subifds ... # in production use resampling to generate sub-resolution images ... for level in range(subresolutions): ... mag = 2 ** (level + 1) ... tif.write( ... data[..., ::mag, ::mag, :], ... subfiletype=1, ... resolution=(1e4 / mag / pixelsize, 1e4 / mag / pixelsize), ... **options, ... ) ... # add a thumbnail image as a separate series ... # it is recognized by QuPath as an associated image ... thumbnail = (data[0, 0, ::8, ::8] >> 2).astype('uint8') ... tif.write(thumbnail, metadata={'Name': 'thumbnail'}) ... Access the image levels in the pyramidal OME-TIFF file: >>> baseimage = imread('temp.ome.tif') >>> second_level = imread('temp.ome.tif', series=0, level=1) >>> with TiffFile('temp.ome.tif') as tif: ... baseimage = tif.series[0].asarray() ... second_level = tif.series[0].levels[1].asarray() ... number_levels = len(tif.series[0].levels) # includes base level ... Iterate over and decode single JPEG compressed tiles in the TIFF file: >>> with TiffFile('temp.ome.tif') as tif: ... fh = tif.filehandle ... for page in tif.pages: ... for index, (offset, bytecount) in enumerate( ... zip(page.dataoffsets, page.databytecounts) ... ): ... _ = fh.seek(offset) ... data = fh.read(bytecount) ... tile, indices, shape = page.decode( ... data, index, jpegtables=page.jpegtables ... ) ... Use Zarr 2 to read parts of the tiled, pyramidal images in the TIFF file: >>> import zarr >>> store = imread('temp.ome.tif', aszarr=True) >>> z = zarr.open(store, mode='r') >>> z >>> z[0] # base layer >>> z[0][2, 0, 128:384, 256:].shape # read a tile from the base layer (256, 256, 3) >>> store.close() Load the base layer from the Zarr 2 store as a dask array: >>> import dask.array >>> store = imread('temp.ome.tif', aszarr=True) >>> dask.array.from_zarr(store, 0) dask.array<...shape=(8, 2, 512, 512, 3)...chunksize=(1, 1, 128, 128, 3)... >>> store.close() Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format: >>> store = imread('temp.ome.tif', aszarr=True) >>> store.write_fsspec('temp.ome.tif.json', url='file://') >>> store.close() Open the fsspec ReferenceFileSystem as a Zarr group: >>> import fsspec >>> import imagecodecs.numcodecs >>> imagecodecs.numcodecs.register_codecs() >>> mapper = fsspec.get_mapper( ... 'reference://', fo='temp.ome.tif.json', target_protocol='file' ... ) >>> z = zarr.open(mapper, mode='r') >>> z Create an OME-TIFF file containing an empty, tiled image series and write to it via the Zarr 2 interface (note: this does not work with compression): >>> imwrite( ... 'temp2.ome.tif', ... shape=(8, 800, 600), ... dtype='uint16', ... photometric='minisblack', ... tile=(128, 128), ... metadata={'axes': 'CYX'}, ... 
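# omitting the data argument creates an empty file of the given shape/dtype
... 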
)
>>> store = imread('temp2.ome.tif', mode='r+', aszarr=True)
>>> z = zarr.open(store, mode='r+')
>>> z
>>> z[3, 100:200, 200:300:2] = 1024
>>> store.close()

Read images from a sequence of TIFF files as NumPy array using two I/O
worker threads:

>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = imread(
...     ['temp_C001T001.tif', 'temp_C001T002.tif'], ioworkers=2, maxworkers=1
... )
>>> image_sequence.shape
(2, 64, 64)
>>> image_sequence.dtype
dtype('float64')

Read an image stack from a series of TIFF files with a file name pattern
as NumPy or Zarr 2 arrays:

>>> image_sequence = TiffSequence('temp_C0*.tif', pattern=r'_(C)(\d+)(T)(\d+)')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
>>> store = image_sequence.aszarr()
>>> zarr.open(store, mode='r')
>>> image_sequence.close()

Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format:

>>> store = image_sequence.aszarr()
>>> store.write_fsspec('temp.json', url='file://')

Open the fsspec ReferenceFileSystem as a Zarr 2 array:

>>> import fsspec
>>> import tifffile.numcodecs
>>> tifffile.numcodecs.register_codec()
>>> mapper = fsspec.get_mapper(
...     'reference://', fo='temp.json', target_protocol='file'
... )
>>> zarr.open(mapper, mode='r')

Inspect the TIFF file from the command line::

    $ python -m tifffile temp.ome.tif

tifffile-2025.3.30/README.rst

.. This file is generated by setup.py

Read and write TIFF files
=========================

Tifffile is a Python library to (1) store NumPy arrays in TIFF (Tagged Image
File Format) files, and (2) read image and metadata from TIFF-like files
used in bioimaging.

Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, GeoTIFF,
Adobe DNG, ZIF (Zoomable Image File Format), MetaMorph STK, Zeiss LSM,
ImageJ hyperstack, Micro-Manager MMStack and NDTiff, SGI, NIHImage,
Olympus FluoView and SIS, ScanImage, Molecular Dynamics GEL, Aperio SVS,
Leica SCN, Roche BIF, PerkinElmer QPTIFF (QPI, PKI), Hamamatsu NDPI,
Argos AVS, and Philips DP formatted files.

Image data can be read as NumPy arrays or Zarr 2 arrays/groups from strips,
tiles, pages (IFDs), SubIFDs, higher-order series, and pyramidal levels.

Image data can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack
compatible files in multi-page, volumetric, pyramidal, memory-mappable,
tiled, predicted, or compressed form.

Many compression and predictor schemes are supported via the imagecodecs
library, including LZW, PackBits, Deflate, PIXTIFF, LZMA, LERC, Zstd,
JPEG (8 and 12-bit, lossless), JPEG 2000, JPEG XR, JPEG XL, WebP, PNG, EER,
Jetraw, 24-bit floating-point, and horizontal differencing.

Tifffile can also be used to inspect TIFF structures, read image data from
multi-dimensional file sequences, write fsspec ReferenceFileSystem for TIFF
files and image file sequences, patch TIFF tag values, and parse many
proprietary metadata formats.

:Author: `Christoph Gohlke <https://www.cgohlke.com>`_
:License: BSD 3-Clause
:Version: 2025.3.30
:DOI: `10.5281/zenodo.6795860 <https://doi.org/10.5281/zenodo.6795860>`_

Quickstart
----------

Install the tifffile package and all dependencies from the
`Python Package Index <https://pypi.org/project/tifffile/>`_::

    python -m pip install -U tifffile[all]

Tifffile is also available in other package repositories such as Anaconda,
Debian, and MSYS2.
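For example, installing from the conda-forge channel::

    conda install -c conda-forge tifffile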
The tifffile library is type annotated and documented via docstrings::

    python -c "import tifffile; help(tifffile)"

Tifffile can be used as a console script to inspect and preview TIFF
files::

    python -m tifffile --help

See `Examples`_ for using the programming interface.

Source code and support are available on
`GitHub <https://github.com/cgohlke/tifffile>`_.

Support is also provided on the
`image.sc <https://forum.image.sc>`_ forum.

Requirements
------------

This revision was tested with the following requirements and dependencies
(other versions may work):

- `CPython <https://www.python.org>`_ 3.10.11, 3.11.9, 3.12.9, 3.13.2 64-bit
- `NumPy <https://pypi.org/project/numpy/>`_ 2.2.4
- `Imagecodecs <https://pypi.org/project/imagecodecs/>`_ 2025.3.30
  (required for encoding or decoding LZW, JPEG, etc. compressed segments)
- `Matplotlib <https://pypi.org/project/matplotlib/>`_ 3.10.1
  (required for plotting)
- `Lxml <https://pypi.org/project/lxml/>`_ 5.3.1
  (required only for validating and printing XML)
- `Zarr <https://pypi.org/project/zarr/>`_ 2.18.5
  (required only for opening Zarr stores; Zarr 3 is not compatible)
- `Fsspec <https://pypi.org/project/fsspec/>`_ 2025.2.0
  (required only for opening ReferenceFileSystem files)

Revisions
---------

2025.3.30

- Pass 5110 tests.
- Fix for imagecodecs 2025.3.30.

2025.3.13

- Change bytes2str to decode only up to first NULL character (breaking).
- Remove stripnull function calls to reduce overhead (#285).
- Deprecate stripnull function.

2025.2.18

- Fix julian_datetime milliseconds (#283).
- Remove deprecated dtype arguments from imread and FileSequence (breaking).
- Remove deprecated imsave and TiffWriter.save function/method (breaking).
- Remove deprecated option to pass multiple values to compression (breaking).
- Remove deprecated option to pass unit to resolution (breaking).
- Remove deprecated enums from TIFF namespace (breaking).
- Remove deprecated lazyattr and squeeze_axes functions (breaking).

2025.1.10

- Improve type hints.
- Deprecate Python 3.10.

2024.12.12

- Read PlaneProperty from STK UIC1Tag (#280).
- Allow 'None' as alias for COMPRESSION.NONE and PREDICTOR.NONE (#274).
- Zarr 3 is not supported (#272).

2024.9.20

- Fix writing colormap to ImageJ files (breaking).
- Improve typing.
- Drop support for Python 3.9.

2024.8.30

- Support writing OME Dataset and some StructuredAnnotations elements.

2024.8.28

- Fix LSM scan types and dimension orders (#269, breaking).
- Use IO[bytes] instead of BinaryIO for typing (#268).

2024.8.24

- Do not remove trailing length-1 dimension writing non-shaped file
  (breaking).
- Fix writing OME-TIFF with certain modulo axes orders.
- Make imshow NaN aware.

2024.8.10

- Relax bitspersample check for JPEG, JPEG2K, and JPEGXL compression (#265).

2024.7.24

- Fix reading contiguous multi-page series via Zarr store (#67).

2024.7.21

- Fix integer overflow in product function caused by numpy types.
- Allow tag reader functions to fail.

2024.7.2

- Enable memmap to create empty files with non-native byte order.
- Deprecate Python 3.9, support Python 3.13.

2024.6.18

- Ensure TiffPage.nodata is castable to dtype (breaking, #260).
- Support Argos AVS slides.

2024.5.22

- Derive TiffPages, TiffPageSeries, FileSequence, StoredShape from Sequence.
- Truncate circular IFD chain, do not raise TiffFileError (breaking).
- Deprecate access to TiffPages.pages and FileSequence.files.
- Enable DeprecationWarning for enums in TIFF namespace.
- Remove some deprecated code (breaking).
- Add iccprofile property to TiffPage and parameter to TiffWriter.write.
- Do not detect VSI as SIS format.
- Limit length of logged exception messages.
- Fix docstring examples not correctly rendered on GitHub (#254, #255).

2024.5.10

- Support reading JPEGXL compression in DNG 1.7.
- Read invalid TIFF created by IDEAS software.
2024.5.3 - Fix reading incompletely written LSM. - Fix reading Philips DP with extra rows of tiles (#253, breaking). 2024.4.24 - Fix compatibility issue with numpy 2 (#252). 2024.4.18 - Fix write_fsspec when last row of tiles is missing in Philips slide (#249). - Add option not to quote file names in write_fsspec. - Allow compressing bilevel images with deflate, LZMA, and Zstd. 2024.2.12 - Deprecate dtype, add chunkdtype parameter in FileSequence.asarray. - Add imreadargs parameters passed to FileSequence.imread. 2024.1.30 - Fix compatibility issue with numpy 2 (#238). - Enable DeprecationWarning for tuple compression argument. - Parse sequence of numbers in xml2dict. 2023.12.9 - … Refer to the CHANGES file for older revisions. Notes ----- TIFF, the Tagged Image File Format, was created by the Aldus Corporation and Adobe Systems Incorporated. Tifffile supports a subset of the TIFF6 specification, mainly 8, 16, 32, and 64-bit integer, 16, 32, and 64-bit float, grayscale and multi-sample images. Specifically, CCITT and OJPEG compression, chroma subsampling without JPEG compression, color space transformations, samples with differing types, or IPTC, ICC, and XMP metadata are not implemented. Besides classic TIFF, tifffile supports several TIFF-like formats that do not strictly adhere to the TIFF6 specification. Some formats allow file and data sizes to exceed the 4 GB limit of the classic TIFF: - **BigTIFF** is identified by version number 43 and uses different file header, IFD, and tag structures with 64-bit offsets. The format also adds 64-bit data types. Tifffile can read and write BigTIFF files. - **ImageJ hyperstacks** store all image data, which may exceed 4 GB, contiguously after the first IFD. Files > 4 GB contain one IFD only. The size and shape of the up to 6-dimensional image data can be determined from the ImageDescription tag of the first IFD, which is Latin-1 encoded. Tifffile can read and write ImageJ hyperstacks. - **OME-TIFF** files store up to 8-dimensional image data in one or multiple TIFF or BigTIFF files. The UTF-8 encoded OME-XML metadata found in the ImageDescription tag of the first IFD defines the position of TIFF IFDs in the high-dimensional image data. Tifffile can read OME-TIFF files (except multi-file pyramidal) and write NumPy arrays to single-file OME-TIFF. - **Micro-Manager NDTiff** stores multi-dimensional image data in one or more classic TIFF files. Metadata contained in a separate NDTiff.index binary file defines the position of the TIFF IFDs in the image array. Each TIFF file also contains metadata in a non-TIFF binary structure at offset 8. Downsampled image data of pyramidal datasets are stored in separate folders. Tifffile can read NDTiff files. Version 0 and 1 series, tiling, stitching, and multi-resolution pyramids are not supported. - **Micro-Manager MMStack** stores 6-dimensional image data in one or more classic TIFF files. Metadata contained in non-TIFF binary structures and JSON strings define the image stack dimensions and the position of the image frame data in the file and the image stack. The TIFF structures and metadata are often corrupted or wrong. Tifffile can read MMStack files. - **Carl Zeiss LSM** files store all IFDs below 4 GB and wrap around 32-bit StripOffsets pointing to image data above 4 GB. The StripOffsets of each series and position require separate unwrapping. The StripByteCounts tag contains the number of bytes for the uncompressed data. Tifffile can read LSM files of any size. 
- **MetaMorph Stack, STK** files contain additional image planes stored
  contiguously after the image data of the first page. The total number of
  planes is equal to the count of the UIC2tag. Tifffile can read STK files.

- **ZIF**, the Zoomable Image File format, is a subspecification of BigTIFF
  with SGI's ImageDepth extension and additional compression schemes.
  Only little-endian, tiled, interleaved, 8-bit per sample images with
  JPEG, PNG, JPEG XR, and JPEG 2000 compression are allowed. Tifffile can
  read and write ZIF files.

- **Hamamatsu NDPI** files use some 64-bit offsets in the file header, IFD,
  and tag structures. Single, LONG typed tag values can exceed 32-bit.
  The high bytes of 64-bit tag values and offsets are stored after IFD
  structures. Tifffile can read NDPI files > 4 GB.
  JPEG compressed segments with dimensions >65530 or missing restart markers
  cannot be decoded with common JPEG libraries. Tifffile works around this
  limitation by separately decoding the MCUs between restart markers, which
  performs poorly. BitsPerSample, SamplesPerPixel, and
  PhotometricInterpretation tags may contain wrong values, which can be
  corrected using the value of tag 65441.

- **Philips TIFF** slides store padded ImageWidth and ImageLength tag values
  for tiled pages. The values can be corrected using the
  DICOM_PIXEL_SPACING attributes of the XML formatted description of the
  first page. Tile offsets and byte counts may be 0. Tifffile can read
  Philips slides.

- **Ventana/Roche BIF** slides store tiles and metadata in a BigTIFF
  container. Tiles may overlap and require stitching based on the
  TileJointInfo elements in the XMP tag. Volumetric scans are stored using
  the ImageDepth extension. Tifffile can read BIF and decode individual
  tiles but does not perform stitching.

- **ScanImage** optionally allows corrupted non-BigTIFF files > 2 GB.
  The values of StripOffsets and StripByteCounts can be recovered using the
  constant differences of the offsets of IFD and tag values throughout the
  file. Tifffile can read such files if the image data are stored
  contiguously in each page.

- **GeoTIFF sparse** files allow strip or tile offsets and byte counts to
  be 0. Such segments are implicitly set to 0 or the NODATA value on
  reading. Tifffile can read GeoTIFF sparse files.

- **Tifffile shaped** files store the array shape and user-provided metadata
  of multi-dimensional image series in JSON format in the ImageDescription
  tag of the first page of the series. The format allows multiple series,
  SubIFDs, sparse segments with zero offset and byte count, and truncated
  series, where only the first page of a series is present, and the image
  data are stored contiguously. No other software besides Tifffile supports
  the truncated format.

Other libraries for reading, writing, inspecting, or manipulating scientific
TIFF files from Python are aicsimageio, apeer-ometiff-library, bigtiff,
fabio.TiffIO, GDAL, imread, large_image, openslide-python, opentile,
pylibtiff, pylsm, pymimage, python-bioformats, pytiff,
scanimagetiffreader-python, SimpleITK, slideio, tiffslide, tifftools, tyf,
xtiff, and ndtiff.

References
----------

- TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.
  https://www.adobe.io/open/standards/TIFF.html
  https://download.osgeo.org/libtiff/doc/
- TIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html
- The BigTIFF File Format.
https://www.awaresystems.be/imaging/tiff/bigtiff.html - MetaMorph Stack (STK) Image File Format. http://mdc.custhelp.com/app/answers/detail/a_id/18862 - Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010). Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011 - The OME-TIFF format. https://docs.openmicroscopy.org/ome-model/latest/ - UltraQuant(r) Version 6.0 for Windows Start-Up Guide. http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf - Micro-Manager File Formats. https://micro-manager.org/wiki/Micro-Manager_File_Formats - ScanImage BigTiff Specification. https://docs.scanimage.org/Appendix/ScanImage+BigTiff+Specification.html - ZIF, the Zoomable Image File format. https://zif.photo/ - GeoTIFF File Format https://gdal.org/drivers/raster/gtiff.html - Cloud optimized GeoTIFF. https://github.com/cogeotiff/cog-spec/blob/master/spec.md - Tags for TIFF and Related Specifications. Digital Preservation. https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml - CIPA DC-008-2016: Exchangeable image file format for digital still cameras: Exif Version 2.31. http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf - The EER (Electron Event Representation) file format. https://github.com/fei-company/EerReaderLib - Digital Negative (DNG) Specification. Version 1.7.1.0, September 2023. https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf - Roche Digital Pathology. BIF image file format for digital pathology. https://diagnostics.roche.com/content/dam/diagnostics/Blueprint/en/pdf/rmd/Roche-Digital-Pathology-BIF-Whitepaper.pdf - Astro-TIFF specification. https://astro-tiff.sourceforge.io/ - Aperio Technologies, Inc. Digital Slides and Third-Party Data Interchange. Aperio_Digital_Slides_and_Third-party_data_interchange.pdf - PerkinElmer image format. https://downloads.openmicroscopy.org/images/Vectra-QPTIFF/perkinelmer/PKI_Image%20Format.docx - NDTiffStorage. https://github.com/micro-manager/NDTiffStorage - Argos AVS File Format. https://github.com/user-attachments/files/15580286/ARGOS.AVS.File.Format.pdf Examples -------- Write a NumPy array to a single-page RGB TIFF file: .. code-block:: python >>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb') Read the image from the TIFF file as NumPy array: .. code-block:: python >>> image = imread('temp.tif') >>> image.shape (256, 256, 3) Use the `photometric` and `planarconfig` arguments to write a 3x3x3 NumPy array to an interleaved RGB, a planar RGB, or a 3-page grayscale TIFF: .. code-block:: python >>> data = numpy.random.randint(0, 255, (3, 3, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb') >>> imwrite('temp.tif', data, photometric='rgb', planarconfig='separate') >>> imwrite('temp.tif', data, photometric='minisblack') Use the `extrasamples` argument to specify how extra components are interpreted, for example, for an RGBA image with unassociated alpha channel: .. code-block:: python >>> data = numpy.random.randint(0, 255, (256, 256, 4), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb', extrasamples=['unassalpha']) Write a 3-dimensional NumPy array to a multi-page, 16-bit grayscale TIFF file: .. code-block:: python >>> data = numpy.random.randint(0, 2**12, (64, 301, 219), 'uint16') >>> imwrite('temp.tif', data, photometric='minisblack') Read the whole image stack from the multi-page TIFF file as NumPy array: .. 
code-block:: python >>> image_stack = imread('temp.tif') >>> image_stack.shape (64, 301, 219) >>> image_stack.dtype dtype('uint16') Read the image from the first page in the TIFF file as NumPy array: .. code-block:: python >>> image = imread('temp.tif', key=0) >>> image.shape (301, 219) Read images from a selected range of pages: .. code-block:: python >>> images = imread('temp.tif', key=range(4, 40, 2)) >>> images.shape (18, 301, 219) Iterate over all pages in the TIFF file and successively read images: .. code-block:: python >>> with TiffFile('temp.tif') as tif: ... for page in tif.pages: ... image = page.asarray() ... Get information about the image stack in the TIFF file without reading any image data: .. code-block:: python >>> tif = TiffFile('temp.tif') >>> len(tif.pages) # number of pages in the file 64 >>> page = tif.pages[0] # get shape and dtype of image in first page >>> page.shape (301, 219) >>> page.dtype dtype('uint16') >>> page.axes 'YX' >>> series = tif.series[0] # get shape and dtype of first image series >>> series.shape (64, 301, 219) >>> series.dtype dtype('uint16') >>> series.axes 'QYX' >>> tif.close() Inspect the "XResolution" tag from the first page in the TIFF file: .. code-block:: python >>> with TiffFile('temp.tif') as tif: ... tag = tif.pages[0].tags['XResolution'] ... >>> tag.value (1, 1) >>> tag.name 'XResolution' >>> tag.code 282 >>> tag.count 1 >>> tag.dtype Iterate over all tags in the TIFF file: .. code-block:: python >>> with TiffFile('temp.tif') as tif: ... for page in tif.pages: ... for tag in page.tags: ... tag_name, tag_value = tag.name, tag.value ... Overwrite the value of an existing tag, for example, XResolution: .. code-block:: python >>> with TiffFile('temp.tif', mode='r+') as tif: ... _ = tif.pages[0].tags['XResolution'].overwrite((96000, 1000)) ... Write a 5-dimensional floating-point array using BigTIFF format, separate color components, tiling, Zlib compression level 8, horizontal differencing predictor, and additional metadata: .. code-block:: python >>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32') >>> imwrite( ... 'temp.tif', ... data, ... bigtiff=True, ... photometric='rgb', ... planarconfig='separate', ... tile=(32, 32), ... compression='zlib', ... compressionargs={'level': 8}, ... predictor=True, ... metadata={'axes': 'TZCYX'}, ... ) Write a 10 fps time series of volumes with xyz voxel size 2.6755x2.6755x3.9474 micron^3 to an ImageJ hyperstack formatted TIFF file: .. code-block:: python >>> volume = numpy.random.randn(6, 57, 256, 256).astype('float32') >>> image_labels = [f'{i}' for i in range(volume.shape[0] * volume.shape[1])] >>> imwrite( ... 'temp.tif', ... volume, ... imagej=True, ... resolution=(1.0 / 2.6755, 1.0 / 2.6755), ... metadata={ ... 'spacing': 3.947368, ... 'unit': 'um', ... 'finterval': 1 / 10, ... 'fps': 10.0, ... 'axes': 'TZYX', ... 'Labels': image_labels, ... }, ... ) Read the volume and metadata from the ImageJ hyperstack file: .. code-block:: python >>> with TiffFile('temp.tif') as tif: ... volume = tif.asarray() ... axes = tif.series[0].axes ... imagej_metadata = tif.imagej_metadata ... >>> volume.shape (6, 57, 256, 256) >>> axes 'TZYX' >>> imagej_metadata['slices'] 57 >>> imagej_metadata['frames'] 6 Memory-map the contiguous image data in the ImageJ hyperstack file: .. 
code-block:: python >>> memmap_volume = memmap('temp.tif') >>> memmap_volume.shape (6, 57, 256, 256) >>> del memmap_volume Create a TIFF file containing an empty image and write to the memory-mapped NumPy array (note: this does not work with compression or tiling): .. code-block:: python >>> memmap_image = memmap( ... 'temp.tif', shape=(256, 256, 3), dtype='float32', photometric='rgb' ... ) >>> type(memmap_image) >>> memmap_image[255, 255, 1] = 1.0 >>> memmap_image.flush() >>> del memmap_image Write two NumPy arrays to a multi-series TIFF file (note: other TIFF readers will not recognize the two series; use the OME-TIFF format for better interoperability): .. code-block:: python >>> series0 = numpy.random.randint(0, 255, (32, 32, 3), 'uint8') >>> series1 = numpy.random.randint(0, 255, (4, 256, 256), 'uint16') >>> with TiffWriter('temp.tif') as tif: ... tif.write(series0, photometric='rgb') ... tif.write(series1, photometric='minisblack') ... Read the second image series from the TIFF file: .. code-block:: python >>> series1 = imread('temp.tif', series=1) >>> series1.shape (4, 256, 256) Successively write the frames of one contiguous series to a TIFF file: .. code-block:: python >>> data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8') >>> with TiffWriter('temp.tif') as tif: ... for frame in data: ... tif.write(frame, contiguous=True) ... Append an image series to the existing TIFF file (note: this does not work with ImageJ hyperstack or OME-TIFF files): .. code-block:: python >>> data = numpy.random.randint(0, 255, (301, 219, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb', append=True) Create a TIFF file from a generator of tiles: .. code-block:: python >>> data = numpy.random.randint(0, 2**12, (31, 33, 3), 'uint16') >>> def tiles(data, tileshape): ... for y in range(0, data.shape[0], tileshape[0]): ... for x in range(0, data.shape[1], tileshape[1]): ... yield data[y : y + tileshape[0], x : x + tileshape[1]] ... >>> imwrite( ... 'temp.tif', ... tiles(data, (16, 16)), ... tile=(16, 16), ... shape=data.shape, ... dtype=data.dtype, ... photometric='rgb', ... ) Write a multi-dimensional, multi-resolution (pyramidal), multi-series OME-TIFF file with optional metadata. Sub-resolution images are written to SubIFDs. Limit parallel encoding to 2 threads. Write a thumbnail image as a separate image series: .. code-block:: python >>> data = numpy.random.randint(0, 255, (8, 2, 512, 512, 3), 'uint8') >>> subresolutions = 2 >>> pixelsize = 0.29 # micrometer >>> with TiffWriter('temp.ome.tif', bigtiff=True) as tif: ... metadata = { ... 'axes': 'TCYXS', ... 'SignificantBits': 8, ... 'TimeIncrement': 0.1, ... 'TimeIncrementUnit': 's', ... 'PhysicalSizeX': pixelsize, ... 'PhysicalSizeXUnit': 'µm', ... 'PhysicalSizeY': pixelsize, ... 'PhysicalSizeYUnit': 'µm', ... 'Channel': {'Name': ['Channel 1', 'Channel 2']}, ... 'Plane': {'PositionX': [0.0] * 16, 'PositionXUnit': ['µm'] * 16}, ... 'Description': 'A multi-dimensional, multi-resolution image', ... 'MapAnnotation': { # for OMERO ... 'Namespace': 'openmicroscopy.org/PyramidResolution', ... '1': '256 256', ... '2': '128 128', ... }, ... } ... options = dict( ... photometric='rgb', ... tile=(128, 128), ... compression='jpeg', ... resolutionunit='CENTIMETER', ... maxworkers=2, ... ) ... tif.write( ... data, ... subifds=subresolutions, ... resolution=(1e4 / pixelsize, 1e4 / pixelsize), ... metadata=metadata, ... **options, ... ) ... # write pyramid levels to the two subifds ... 
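# each subsequent level halves the resolution of the previous one
... 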
# in production use resampling to generate sub-resolution images ... for level in range(subresolutions): ... mag = 2 ** (level + 1) ... tif.write( ... data[..., ::mag, ::mag, :], ... subfiletype=1, ... resolution=(1e4 / mag / pixelsize, 1e4 / mag / pixelsize), ... **options, ... ) ... # add a thumbnail image as a separate series ... # it is recognized by QuPath as an associated image ... thumbnail = (data[0, 0, ::8, ::8] >> 2).astype('uint8') ... tif.write(thumbnail, metadata={'Name': 'thumbnail'}) ... Access the image levels in the pyramidal OME-TIFF file: .. code-block:: python >>> baseimage = imread('temp.ome.tif') >>> second_level = imread('temp.ome.tif', series=0, level=1) >>> with TiffFile('temp.ome.tif') as tif: ... baseimage = tif.series[0].asarray() ... second_level = tif.series[0].levels[1].asarray() ... number_levels = len(tif.series[0].levels) # includes base level ... Iterate over and decode single JPEG compressed tiles in the TIFF file: .. code-block:: python >>> with TiffFile('temp.ome.tif') as tif: ... fh = tif.filehandle ... for page in tif.pages: ... for index, (offset, bytecount) in enumerate( ... zip(page.dataoffsets, page.databytecounts) ... ): ... _ = fh.seek(offset) ... data = fh.read(bytecount) ... tile, indices, shape = page.decode( ... data, index, jpegtables=page.jpegtables ... ) ... Use Zarr 2 to read parts of the tiled, pyramidal images in the TIFF file: .. code-block:: python >>> import zarr >>> store = imread('temp.ome.tif', aszarr=True) >>> z = zarr.open(store, mode='r') >>> z >>> z[0] # base layer >>> z[0][2, 0, 128:384, 256:].shape # read a tile from the base layer (256, 256, 3) >>> store.close() Load the base layer from the Zarr 2 store as a dask array: .. code-block:: python >>> import dask.array >>> store = imread('temp.ome.tif', aszarr=True) >>> dask.array.from_zarr(store, 0) dask.array<...shape=(8, 2, 512, 512, 3)...chunksize=(1, 1, 128, 128, 3)... >>> store.close() Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format: .. code-block:: python >>> store = imread('temp.ome.tif', aszarr=True) >>> store.write_fsspec('temp.ome.tif.json', url='file://') >>> store.close() Open the fsspec ReferenceFileSystem as a Zarr group: .. code-block:: python >>> import fsspec >>> import imagecodecs.numcodecs >>> imagecodecs.numcodecs.register_codecs() >>> mapper = fsspec.get_mapper( ... 'reference://', fo='temp.ome.tif.json', target_protocol='file' ... ) >>> z = zarr.open(mapper, mode='r') >>> z Create an OME-TIFF file containing an empty, tiled image series and write to it via the Zarr 2 interface (note: this does not work with compression): .. code-block:: python >>> imwrite( ... 'temp2.ome.tif', ... shape=(8, 800, 600), ... dtype='uint16', ... photometric='minisblack', ... tile=(128, 128), ... metadata={'axes': 'CYX'}, ... ) >>> store = imread('temp2.ome.tif', mode='r+', aszarr=True) >>> z = zarr.open(store, mode='r+') >>> z >>> z[3, 100:200, 200:300:2] = 1024 >>> store.close() Read images from a sequence of TIFF files as NumPy array using two I/O worker threads: .. code-block:: python >>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64)) >>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64)) >>> image_sequence = imread( ... ['temp_C001T001.tif', 'temp_C001T002.tif'], ioworkers=2, maxworkers=1 ... ) >>> image_sequence.shape (2, 64, 64) >>> image_sequence.dtype dtype('float64') Read an image stack from a series of TIFF files with a file name pattern as NumPy or Zarr 2 arrays: .. 
code-block:: python >>> image_sequence = TiffSequence('temp_C0*.tif', pattern=r'_(C)(\d+)(T)(\d+)') >>> image_sequence.shape (1, 2) >>> image_sequence.axes 'CT' >>> data = image_sequence.asarray() >>> data.shape (1, 2, 64, 64) >>> store = image_sequence.aszarr() >>> zarr.open(store, mode='r') >>> image_sequence.close() Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format: .. code-block:: python >>> store = image_sequence.aszarr() >>> store.write_fsspec('temp.json', url='file://') Open the fsspec ReferenceFileSystem as a Zarr 2 array: .. code-block:: python >>> import fsspec >>> import tifffile.numcodecs >>> tifffile.numcodecs.register_codec() >>> mapper = fsspec.get_mapper( ... 'reference://', fo='temp.json', target_protocol='file' ... ) >>> zarr.open(mapper, mode='r') Inspect the TIFF file from the command line:: $ python -m tifffile temp.ome.tif././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1743283853.7997153 tifffile-2025.3.30/docs/0000777000000000000000000000000014772063216011510 5ustar00././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1743283853.7997153 tifffile-2025.3.30/docs/_static/0000777000000000000000000000000014772063216013136 5ustar00././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1653766411.0 tifffile-2025.3.30/docs/_static/custom.css0000666000000000000000000000017714244474413015166 0ustar00dl { margin: 0; margin-top: 1em; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; padding: 0; }././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1735692675.0 tifffile-2025.3.30/docs/conf.py0000666000000000000000000000211214735110603012773 0ustar00# tifffile/docs/conf.py import os import sys here = os.path.dirname(__file__) sys.path.insert(0, os.path.split(here)[0]) import tifffile project = 'tifffile' copyright = '2008-2025, Christoph Gohlke' author = 'Christoph Gohlke' version = tifffile.__version__ extensions = [ 'sphinx.ext.napoleon', 'sphinx.ext.autodoc', 'sphinx.ext.autosummary', 'sphinx.ext.doctest', # 'sphinxcontrib.spelling', # 'sphinx.ext.viewcode', # 'sphinx.ext.autosectionlabel', # 'numpydoc', # 'sphinx_issues', ] templates_path = ['_templates'] exclude_patterns = [] html_theme = 'alabaster' html_static_path = ['_static'] html_css_files = ['custom.css'] html_show_sourcelink = False autodoc_member_order = 'bysource' # bysource, groupwise autodoc_default_flags = ['members'] autodoc_typehints = 'description' autodoc_type_aliases = {'ArrayLike': 'numpy.ArrayLike'} autoclass_content = 'class' autosectionlabel_prefix_document = True autosummary_generate = True napoleon_google_docstring = True napoleon_numpy_docstring = False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691864984.0 tifffile-2025.3.30/docs/make.py0000666000000000000000000000707414465747630013017 0ustar00# tifffile/docs/make.py """Make documentation for tifffile package using Sphinx.""" import os import sys from sphinx.cmd.build import main here = os.path.dirname(__file__) sys.path.insert(0, os.path.split(here)[0]) path = os.environ.get('PATH') if path: os.environ['PATH'] = os.path.join(sys.exec_prefix, 'Scripts') + ';' + path import tifffile # noqa members = [ 'imread', 'imwrite', 'memmap', 'TiffWriter', 'TiffFile', # 'TiffFileError', 'TiffFormat', 'TiffPage', 'TiffFrame', 'TiffPages', 'TiffTag', 'TiffTags', 'TiffTagRegistry', 'TiffPageSeries', 'TiffSequence', 'FileSequence', 'ZarrStore', 'ZarrTiffStore', 'ZarrFileSequenceStore', # 
Constants 'DATATYPE', 'SAMPLEFORMAT', 'PLANARCONFIG', 'COMPRESSION', 'PREDICTOR', 'EXTRASAMPLE', 'FILETYPE', 'PHOTOMETRIC', 'RESUNIT', 'CHUNKMODE', 'TIFF', # classes 'FileHandle', 'OmeXml', # 'OmeXmlError', 'Timer', 'NullContext', 'StoredShape', 'TiledSequence', # functions 'logger', 'repeat_nd', 'natural_sorted', 'parse_filenames', 'matlabstr2py', 'strptime', 'imagej_metadata_tag', 'imagej_description', # 'read_scanimage_metadata', 'read_micromanager_metadata', 'read_ndtiff_index', 'create_output', 'hexdump', 'xml2dict', 'tiffcomment', 'tiff2fsspec', 'lsm2bin', 'validate_jhove', 'imshow', '.geodb', ] title = f'tifffile {tifffile.__version__}' underline = '=' * len(title) memberlist = '\n '.join(m.replace('.', '').lower() for m in members if m) with open(here + '/index.rst', 'w') as fh: fh.write( f""".. tifffile documentation .. currentmodule:: tifffile {title} {underline} .. automodule:: tifffile .. toctree:: :hidden: :maxdepth: 2 genindex license revisions examples .. toctree:: :hidden: :maxdepth: 2 {memberlist} """ ) with open(here + '/genindex.rst', 'w') as fh: fh.write( """ Index ===== """ ) with open(here + '/license.rst', 'w') as fh: fh.write( """ License ======= .. include:: ../LICENSE """ ) with open(here + '/examples.rst', 'w') as fh: fh.write( """ Examples ======== See `#examples `_. """ ) with open(here + '/revisions.rst', 'w') as fh: fh.write(""".. include:: ../CHANGES.rst""") with open('tiff.rst', 'w') as fh: fh.write( """ .. currentmodule:: tifffile TIFF ==== .. autoclass:: tifffile.TIFF :members: .. autoclass:: tifffile._TIFF :members: """ ) automodule = """.. currentmodule:: tifffile {name} {size} .. automodule:: tifffile.{name} :members: """ autoclass = """.. currentmodule:: tifffile {name} {size} .. autoclass:: tifffile.{name} :members: """ automethod = """.. currentmodule:: tifffile {name} {size} .. autofunction:: {name} """ for name in members: if not name or name == 'TIFF': continue if name[0] == '.': template = automodule name = name[1:] elif name[0].isupper(): template = autoclass else: template = automethod size = '=' * len(name) with open(f'{here}/{name.lower()}.rst', 'w') as fh: fh.write(template.format(name=name, size=size)) main(['-b', 'html', here, here + '/html']) os.system('start html/index.html') ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1743283853.7997153 tifffile-2025.3.30/examples/0000777000000000000000000000000014772063216012376 5ustar00././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739924907.0 tifffile-2025.3.30/examples/earthbigdata.py0000666000000000000000000003562614755222653015406 0ustar00# tifffile/examples/earthbigdata.py # Copyright (c) 2021-2025, Christoph Gohlke # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # 1. Redistributions of source code must retain the above copyright notice, # this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # 3. Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE # POSSIBILITY OF SUCH DAMAGE. # This file uses VSCode Jupyter-like code cells # https://code.visualstudio.com/docs/python/jupyter-support-py # %% [markdown] """ # Create a fsspec ReferenceFileSystem for a large set of remote GeoTIFF files by [Christoph Gohlke](https://www.cgohlke.com) Published Oct 9, 2021. Last updated Feb 18, 2025. This Python script uses the [tifffile](https://github.com/cgohlke/tifffile) and [imagecodecs](https://github.com/cgohlke/imagecodecs) packages to create a [fsspec ReferenceFileSystem](https://github.com/fsspec/kerchunk) file in JSON format for the [Earthbigdata]( http://sentinel-1-global-coherence-earthbigdata.s3-website-us-west-2.amazonaws.com ) set, which consists of 1,033,422 GeoTIFF files stored on AWS. The ReferenceFileSystem is used to create a multi-dimensional Xarray dataset. Refer to the discussion at [kerchunk/issues/78]( https://github.com/fsspec/kerchunk/issues/78). """ # %% import base64 import os import fsspec import imagecodecs.numcodecs import matplotlib.pyplot import numcodecs import tifffile import xarray import zarr # < 3 assert zarr.__version__[0] == '2' # %% [markdown] """ ## Get a list of all remote TIFF files Call the AWS command line app to recursively list all files in the Earthbigdata set. Cache the output in a local file. Filter the list for TIFF files and remove the common path. """ # %% if not os.path.exists('earthbigdata.txt'): os.system( 'aws s3 ls sentinel-1-global-coherence-earthbigdata/data/tiles' ' --recursive > earthbigdata.txt' ) with open('earthbigdata.txt', encoding='utf-8') as fh: tiff_files = [ line.split()[-1][11:] for line in fh.readlines() if '.tif' in line ] print('Number of TIFF files:', len(tiff_files)) # %% [markdown] """ ## Define metadata to describe the dataset Define labels, coordinate arrays, file name regular expression patterns, and categories for all dimensions in the Earthbigdata set. 
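For example, a hypothetical tile name such as `N48W090_winter_vv_COH12.tif`
(used here only for illustration; actual names come from the file listing
above) would be parsed into the five chunk indices by a simplified
equivalent of the combined patterns defined in the next cell:

```python
import re

# simplified sketch of the combined file name pattern
pattern = (
    '(?P<latitude>[NS][0-9]+)(?P<longitude>[EW][0-9]+)'
    '_(?P<season>winter|spring|summer|fall)'
    '_(?P<polarization>vv|vh|hv|hh)'
    '_COH(?P<coherence>06|12|18|24|36|48)'
)
match = re.search(pattern, 'N48W090_winter_vv_COH12.tif')
assert match is not None
assert match['season'] == 'winter' and match['coherence'] == '12'
```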
""" # %% baseurl = ( 'https://' 'sentinel-1-global-coherence-earthbigdata.s3.us-west-2.amazonaws.com' '/data/tiles/' ) chunkshape = (1200, 1200) fillvalue = 0 latitude_label = 'latitude' latitude_pattern = rf'(?P<{latitude_label}>[NS]\d+)' latitude_coordinates = [ (j * -0.00083333333 - 0.000416666665 + i) for i in range(82, -79, -1) for j in range(1200) ] latitude_category = {} i = 0 for j in range(82, -1, -1): latitude_category[f'N{j:-02}'] = i i += 1 for j in range(1, 79): latitude_category[f'S{j:-02}'] = i i += 1 longitude_label = 'longitude' longitude_pattern = rf'(?P<{longitude_label}>[EW]\d+)' longitude_coordinates = [ (j * 0.00083333333 + 0.000416666665 + i) for i in range(-180, 180) for j in range(1200) ] longitude_category = {} i = 0 for j in range(180, 0, -1): longitude_category[f'W{j:-03}'] = i i += 1 for j in range(180): longitude_category[f'E{j:-03}'] = i i += 1 season_label = 'season' season_category = {'winter': 0, 'spring': 1, 'summer': 2, 'fall': 3} season_coordinates = list(season_category.keys()) season_pattern = rf'_(?P<{season_label}>{"|".join(season_category)})' polarization_label = 'polarization' polarization_category = {'vv': 0, 'vh': 1, 'hv': 2, 'hh': 3} polarization_coordinates = list(polarization_category.keys()) polarization_pattern = ( rf'_(?P<{polarization_label}>{"|".join(polarization_category)})' ) coherence_label = 'coherence' coherence_category = { '06': 0, '12': 1, '18': 2, '24': 3, '36': 4, '48': 5, } coherence_coordinates = list(int(i) for i in coherence_category.keys()) coherence_pattern = ( rf'_COH(?P<{coherence_label}>{"|".join(coherence_category)})' ) orbit_label = 'orbit' orbit_coordinates = list(range(1, 176)) orbit_pattern = rf'_(?P<{orbit_label}>\d+)' flightdirection_label = 'flightdirection' flightdirection_category = {'A': 0, 'D': 1} flightdirection_coordinates = list(flightdirection_category.keys()) flightdirection_pattern = ( rf'(?P<{flightdirection_label}>[{"|".join(flightdirection_category)}])_' ) # %% [markdown] """ ## Open a file for writing the fsspec ReferenceFileSystem in JSON format """ # %% jsonfile = open('earthbigdata.json', 'w', encoding='utf-8', newline='\n') # %% [markdown] """ ## Write the coordinate arrays Add the coordinate arrays to a Zarr group, convert it to a fsspec ReferenceFileSystem JSON string, and write it to the open file. 
""" # %% coordinates = {} # type: ignore[var-annotated] zarrgroup = zarr.open_group(coordinates) zarrgroup.array( longitude_label, data=longitude_coordinates, dtype='float32', # compression='zlib', ).attrs['_ARRAY_DIMENSIONS'] = [longitude_label] zarrgroup.array( latitude_label, data=latitude_coordinates, dtype='float32', # compression='zlib', ).attrs['_ARRAY_DIMENSIONS'] = [latitude_label] zarrgroup.array( season_label, data=season_coordinates, dtype=object, object_codec=numcodecs.VLenUTF8(), compression=None, ).attrs['_ARRAY_DIMENSIONS'] = [season_label] zarrgroup.array( polarization_label, data=polarization_coordinates, dtype=object, object_codec=numcodecs.VLenUTF8(), compression=None, ).attrs['_ARRAY_DIMENSIONS'] = [polarization_label] zarrgroup.array( coherence_label, data=coherence_coordinates, dtype='uint8', compression=None, ).attrs['_ARRAY_DIMENSIONS'] = [coherence_label] zarrgroup.array(orbit_label, data=orbit_coordinates, dtype='int32').attrs[ '_ARRAY_DIMENSIONS' ] = [orbit_label] zarrgroup.array( flightdirection_label, data=flightdirection_coordinates, dtype=object, object_codec=numcodecs.VLenUTF8(), compression=None, ).attrs['_ARRAY_DIMENSIONS'] = [flightdirection_label] # base64 encode any values containing non-ascii characters for k, v in coordinates.items(): try: coordinates[k] = v.decode() except UnicodeDecodeError: coordinates[k] = 'base64:' + base64.b64encode(v).decode() coordinates_json = tifffile.ZarrStore._json(coordinates).decode() jsonfile.write(coordinates_json[:-2]) # skip the last newline and brace # %% [markdown] """ ## Create a TiffSequence from a list of file names Filter the list of GeoTIFF files for files containing coherence 'COH' data. The regular expression pattern and categories are used to parse the file names for chunk indices. Note: the created TiffSequence cannot be used to access any files. The file names do not refer to existing files. The `baseurl` is later used to get the real location of the files. """ # %% mode = 'COH' fileseq = tifffile.TiffSequence( [file for file in tiff_files if '_' + mode in file], pattern=( latitude_pattern + longitude_pattern + season_pattern + polarization_pattern + coherence_pattern ), categories={ latitude_label: latitude_category, longitude_label: longitude_category, season_label: season_category, polarization_label: polarization_category, coherence_label: coherence_category, }, ) assert len(fileseq) == 444821 assert fileseq.files_missing == 5119339 assert fileseq.shape == (161, 360, 4, 4, 6) assert fileseq.dims == ( 'latitude', 'longitude', 'season', 'polarization', 'coherence', ) print(fileseq) # %% [markdown] """ ## Create a ZarrTiffStore from the TiffSequence Define `axestiled` to tile the latitude and longitude dimensions of the TiffSequence with the first and second image/chunk dimensions. Define extra `zattrs` to create a Xarray compatible store. """ # %% store = fileseq.aszarr( chunkdtype='uint8', chunkshape=chunkshape, fillvalue=fillvalue, axestiled={0: 0, 1: 1}, zattrs={ '_ARRAY_DIMENSIONS': [ season_label, polarization_label, coherence_label, latitude_label, longitude_label, ] }, ) print(store) # %% [markdown] """ ## Append the ZarrTiffStore to the open ReferenceFileSystem file Use the mode name to create a Zarr subgroup. Use the `imagecodecs_tiff` Numcodecs compatible codec for decoding TIFF files. 
""" # %% store.write_fsspec( jsonfile, baseurl, groupname=mode, codec_id='imagecodecs_tiff', _append=True, _close=False, ) # %% [markdown] """ ## Repeat for the other modes Repeat the `TiffSequence->aszarr->write_fsspec` workflow for the other modes. """ # %% for mode in ( 'AMP', 'tau', 'rmse', 'rho', ): fileseq = tifffile.TiffSequence( [file for file in tiff_files if '_' + mode in file], pattern=( latitude_pattern + longitude_pattern + season_pattern + polarization_pattern ), categories={ latitude_label: latitude_category, longitude_label: longitude_category, season_label: season_category, polarization_label: polarization_category, }, ) print(fileseq) with fileseq.aszarr( chunkdtype='uint16', chunkshape=chunkshape, fillvalue=fillvalue, axestiled={0: 0, 1: 1}, zattrs={ '_ARRAY_DIMENSIONS': [ season_label, polarization_label, latitude_label, longitude_label, ] }, ) as store: print(store) store.write_fsspec( jsonfile, baseurl, groupname=mode, codec_id='imagecodecs_tiff', _append=True, _close=False, ) for mode in ('inc', 'lsmap'): fileseq = tifffile.TiffSequence( [file for file in tiff_files if '_' + mode in file], pattern=( latitude_pattern + longitude_pattern + orbit_pattern + flightdirection_pattern ), categories={ latitude_label: latitude_category, longitude_label: longitude_category, # orbit has no category flightdirection_label: flightdirection_category, }, ) print(fileseq) with fileseq.aszarr( chunkdtype='uint8', chunkshape=chunkshape, fillvalue=fillvalue, axestiled={0: 0, 1: 1}, zattrs={ '_ARRAY_DIMENSIONS': [ orbit_label, flightdirection_label, latitude_label, longitude_label, ] }, ) as store: print(store) store.write_fsspec( jsonfile, baseurl, groupname=mode, codec_id='imagecodecs_tiff', _append=True, _close=mode == 'lsmap', # close after last store ) # %% [markdown] """ ## Close the JSON file """ # %% jsonfile.close() # %% [markdown] """ ## Use the fsspec ReferenceFileSystem file to create a Xarray dataset Register `imagecodecs.numcodecs` before using the ReferenceFileSystem. """ # %% imagecodecs.numcodecs.register_codecs() # %% [markdown] """ ### Create a fsspec mapper instance from the ReferenceFileSystem file Specify the `target_protocol` to load a local file. """ # %% mapper = fsspec.get_mapper( 'reference://', fo='earthbigdata.json', target_protocol='file', remote_protocol='https', ) # %% [markdown] """ ### Create a Xarray dataset from the mapper Use `mask_and_scale` to disable conversion to floating point. """ # %% dataset = xarray.open_dataset( mapper, engine='zarr', mask_and_scale=False, backend_kwargs={'consolidated': False}, ) print(dataset) # %% [markdown] """ ### Select the Southern California region in the dataset """ # %% socal = dataset.sel(latitude=slice(36, 32.5), longitude=slice(-121, -115)) print(socal) # %% [markdown] """ ### Plot a selection of the dataset The few GeoTIFF files comprising the selection are transparently downloaded, decoded, and stitched to an in-memory NumPy array and plotted using Matplotlib. """ # %% image = socal['COH'].loc['winter', 'vv', 12] assert image[100, 100] == 53 xarray.plot.imshow(image, size=6, aspect=1.8) matplotlib.pyplot.show() # %% [markdown] """ ## System information Print information about the software used to run this script. 
""" # %% def system_info() -> str: """Print information about Python environment.""" import datetime import sys import fsspec import imagecodecs import matplotlib import numcodecs import numpy import tifffile import xarray import zarr return '\n'.join( ( sys.executable, f'Python {sys.version}', '', f'numpy {numpy.__version__}', f'matplotlib {matplotlib.__version__}', f'tifffile {tifffile.__version__}', f'imagecodecs {imagecodecs.__version__}', f'numcodecs {numcodecs.__version__}', f'fsspec {fsspec.__version__}', f'xarray {xarray.__version__}', f'zarr {zarr.__version__}', '', str(datetime.datetime.now()), ) ) print(system_info()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1706464974.0 tifffile-2025.3.30/examples/issue125.py0000666000000000000000000000475614555513316014344 0ustar00# tifffile/examples/issues125.py """Create a Fsspec ReferenceFileSystem for a sequence of TIFF files on S3 This Python script uses the Tifffile and Fsspec libraries to create a multiscale ReferenceFileSystem JSON file for a sequence of cloud optimized GeoTIFF (COG) files stored on S3. The tiles of the COG files are used as chunks. No additional Numcodecs codec needs to be registered since the COG files use Zlib compression. A Xarray dataset is created from the ReferenceFileSystem file and a subset of the dataset is plotted. See https://github.com/cgohlke/tifffile/issues/125 """ import fsspec import tifffile import xarray from matplotlib import pyplot # get a list of cloud optimized GeoTIFF files stored on S3 remote_options = { 'anon': True, 'client_kwargs': {'endpoint_url': 'https://mghp.osn.xsede.org'}, } fs = fsspec.filesystem('s3', **remote_options) files = [f's3://{f}' for f in fs.ls('/rsignellbucket1/lcmap/cog')] # write the ReferenceFileSystem of each file to a JSON file with open('issue125.json', 'w', encoding='utf-8', newline='\n') as jsonfile: for i, filename in enumerate(tifffile.natural_sorted(files)): url, name = filename.rsplit('/', 1) with fs.open(filename) as fh: with tifffile.TiffFile(fh, name=name) as tif: print(tif.geotiff_metadata) with tif.series[0].aszarr() as store: store.write_fsspec( jsonfile, url=url, # using an experimental API: _shape=[len(files)], # shape of file sequence _axes='T', # axes of file sequence _index=[i], # index of this file in sequence _append=i != 0, # if True, only write index keys+value _close=i == len(files) - 1, # if True, no more appends # groupname='0', # required for non-pyramidal series ) # create a fsspec mapper instance from the ReferenceFileSystem file mapper = fsspec.get_mapper( 'reference://', fo='issue125.json', target_protocol='file', remote_protocol='s3', remote_options=remote_options, ) # create a xarray dataset from the mapper dataset = xarray.open_zarr(mapper, consolidated=False) print(dataset) # plot a slice of the 5th pyramidal level of the dataset xarray.plot.imshow(dataset['5'][0, 32:-32, 32:-32]) pyplot.show() ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1743283853.8097856 tifffile-2025.3.30/setup.cfg0000666000000000000000000000005214772063216012376 0ustar00[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1742619210.0 tifffile-2025.3.30/setup.py0000666000000000000000000001126714767441112012300 0ustar00# tifffile/setup.py """Tifffile package Setuptools script.""" import re import sys from setuptools import setup buildnumber = '' def search(pattern: str, string: str, flags: int = 0) -> str: """Return first 
match of pattern in string.""" match = re.search(pattern, string, flags) if match is None: raise ValueError(f'{pattern!r} not found') return match.groups()[0] def fix_docstring_examples(docstring: str) -> str: """Return docstring with examples fixed for GitHub.""" start = True indent = False lines = ['..', ' This file is generated by setup.py', ''] for line in docstring.splitlines(): if not line.strip(): start = True indent = False if line.startswith('>>> '): indent = True if start: lines.extend(['.. code-block:: python', '']) start = False lines.append((' ' if indent else '') + line) return '\n'.join(lines) with open('tifffile/tifffile.py', encoding='utf-8') as fh: code = fh.read().replace('\r\n', '\n').replace('\r', '\n') version = search(r"__version__ = '(.*?)'", code).replace('.x.x', '.dev0') version += ('.' + buildnumber) if buildnumber else '' description = search(r'"""(.*)\.(?:\r\n|\r|\n)', code) readme = search( r'(?:\r\n|\r|\n){2}r"""(.*)"""(?:\r\n|\r|\n){2}from __future__', code, re.MULTILINE | re.DOTALL, ) readme = '\n'.join( [description, '=' * len(description)] + readme.splitlines()[1:] ) if 'sdist' in sys.argv: # update README, LICENSE, and CHANGES files with open('README.rst', 'w', encoding='utf-8') as fh: fh.write(fix_docstring_examples(readme)) license = search( r'(# Copyright.*?(?:\r\n|\r|\n))(?:\r\n|\r|\n)+r""', code, re.MULTILINE | re.DOTALL, ) license = license.replace('# ', '').replace('#', '') with open('LICENSE', 'w', encoding='utf-8') as fh: fh.write('BSD 3-Clause License\n\n') fh.write(license) revisions = search( r'(?:\r\n|\r|\n){2}(Revisions.*)- …', readme, re.MULTILINE | re.DOTALL, ).strip() with open('CHANGES.rst', encoding='utf-8') as fh: old = fh.read() old = old.split(revisions.splitlines()[-1])[-1] with open('CHANGES.rst', 'w', encoding='utf-8') as fh: fh.write(revisions.strip()) fh.write(old) setup( name='tifffile', version=version, license='BSD-3-Clause', description=description, long_description=readme, long_description_content_type='text/x-rst', author='Christoph Gohlke', author_email='cgohlke@cgohlke.com', url='https://www.cgohlke.com', project_urls={ 'Bug Tracker': 'https://github.com/cgohlke/tifffile/issues', 'Source Code': 'https://github.com/cgohlke/tifffile', # 'Documentation': 'https://', }, packages=['tifffile'], package_data={'tifffile': ['py.typed']}, python_requires='>=3.10', install_requires=[ 'numpy', # 'imagecodecs>=2023.8.12', ], extras_require={ 'codecs': ['imagecodecs>=2024.12.30'], 'xml': ['defusedxml', 'lxml'], 'zarr': ['zarr<3', 'fsspec'], 'plot': ['matplotlib'], 'all': [ 'imagecodecs>=2024.12.30', 'matplotlib', 'defusedxml', 'lxml', 'zarr<3', 'fsspec', ], 'test': [ 'pytest', 'imagecodecs', 'czifile', 'cmapfile', 'oiffile', 'lfdfiles', 'psdtags', 'roifile', 'lxml', 'zarr<3', 'dask', 'xarray', 'fsspec', 'defusedxml', 'ndtiff', ], }, entry_points={ 'console_scripts': [ 'tifffile = tifffile:main', 'tiffcomment = tifffile.tiffcomment:main', 'tiff2fsspec = tifffile.tiff2fsspec:main', 'lsm2bin = tifffile.lsm2bin:main', ], # 'napari.plugin': ['tifffile = tifffile.napari_tifffile'], }, platforms=['any'], classifiers=[ 'Development Status :: 4 - Beta', 'Intended Audience :: Science/Research', 'Intended Audience :: Developers', 'Operating System :: OS Independent', 'Programming Language :: Python :: 3 :: Only', 'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: 3.11', 'Programming Language :: Python :: 3.12', 'Programming Language :: Python :: 3.13', ], ) 
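# Usage note (illustrative, not part of the sdist): the optional dependency
# groups declared in extras_require above can be installed with pip extras,
# for example:
#
#   python -m pip install "tifffile[zarr,plot]"  # Zarr/fsspec and matplotlib
#   python -m pip install "tifffile[all]"        # all optional dependencies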
# tifffile/tests/conftest.py

from __future__ import annotations

import os
import sys
from typing import Any

if os.environ.get('VSCODE_CWD'):
    # work around pytest not using PYTHONPATH in VSCode
    sys.path.insert(
        0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
    )

if os.environ.get('SKIP_CODECS', None):
    sys.modules['imagecodecs'] = None  # type: ignore[assignment]


def pytest_report_header(config: Any, start_path: Any) -> Any:
    try:
        from numpy import __version__ as numpy
        from test_tifffile import config
        from tifffile import __version__ as tifffile

        try:
            from imagecodecs import __version__ as imagecodecs
        except ImportError:
            imagecodecs = 'N/A'
        try:
            from zarr import __version__ as zarr
        except ImportError:
            zarr = 'N/A'
        try:
            from dask import __version__ as dask
        except ImportError:
            dask = 'N/A'
        try:
            from xarray import __version__ as xarray
        except ImportError:
            xarray = 'N/A'
        try:
            from fsspec import __version__ as fsspec
        except ImportError:
            fsspec = 'N/A'
        return (
            f'versions: tifffile-{tifffile}, '
            f'imagecodecs-{imagecodecs}, '
            f'numpy-{numpy}, '
            f'zarr-{zarr}, '
            f'dask-{dask}, '
            f'xarray-{xarray}, '
            f'fsspec-{fsspec}\n'
            f'test config: {config()}'
        )
    except Exception:
        pass


collect_ignore = ['_tmp', 'data', 'data-']

# test_tifffile.py

# Copyright (c) 2008-2025, Christoph Gohlke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
#    this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions and the following disclaimer in the documentation
#    and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
#    contributors may be used to endorse or promote products derived from
#    this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

# mypy: allow-untyped-defs
# mypy: check-untyped-defs=False

"""Unittests for the tifffile package.

Public data files can be requested from the author.
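Tests that depend on these data files are skipped when the corresponding
directories are missing; the SKIP_* flags defined below can also be forced
through environment variables of the same name, for example SKIP_PRIVATE=1.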
Private data files are not available due to size and copyright restrictions. :Version: 2025.3.30 """ from __future__ import annotations import binascii import datetime import glob import json import logging import math import mmap import os import pathlib import random import re import struct import sys import tempfile import urllib.error import urllib.request from io import BytesIO from typing import TYPE_CHECKING if TYPE_CHECKING: from typing import Any from types import ModuleType from numpy.typing import ArrayLike, DTypeLike, NDArray import numpy import pytest import tifffile from numpy.testing import ( assert_allclose, assert_array_almost_equal, assert_array_equal, assert_raises, ) try: from tifffile import * # noqa: F403 STAR_IMPORTED = ( TIFF, # noqa: F405 imwrite, # noqa imread, # noqa imshow, # noqa TiffWriter, # noqa TiffReader, # noqa TiffFile, # noqa TiffFileError, # noqa TiffSequence, # noqa TiffPage, # noqa TiffFrame, # noqa FileHandle, # noqa FileSequence, # noqa Timer, # noqa logger, # noqa strptime, # noqa natural_sorted, # noqa stripnull, # noqa memmap, # noqa repeat_nd, # noqa format_size, # noqa product, # noqa create_output, # noqa askopenfilename, # noqa read_scanimage_metadata, # noqa read_micromanager_metadata, # noqa OmeXmlError, # noqa OmeXml, # noqa ) # type: tuple[object, ...] except NameError: STAR_IMPORTED = () from tifffile import ( # noqa: F401 CHUNKMODE, COMPRESSION, DATATYPE, EXTRASAMPLE, FILETYPE, FILLORDER, OFILETYPE, ORIENTATION, PHOTOMETRIC, PLANARCONFIG, PREDICTOR, RESUNIT, SAMPLEFORMAT, TIFF, FileCache, FileHandle, FileSequence, OmeXml, OmeXmlError, TiffFile, TiffFileError, TiffFrame, TiffPage, TiffPageSeries, TiffReader, TiffSequence, TiffTag, TiffTags, TiffWriter, Timer, ZarrFileSequenceStore, ZarrStore, ZarrTiffStore, ) from tifffile.tifffile import ( # noqa: F401 apply_colormap, asbool, askopenfilename, astrotiff_description_metadata, byteorder_compare, byteorder_isnative, bytes2str, check_shape, create_output, enumarg, epics_datetime, excel_datetime, fluoview_description_metadata, format_size, hexdump, imagej_description, imagej_description_metadata, imagej_shape, imread, imshow, imwrite, julian_datetime, logger, lsm2bin, matlabstr2py, memmap, metaseries_description_metadata, natural_sorted, order_axes, parse_filenames, pformat, pilatus_description_metadata, product, read_micromanager_metadata, read_scanimage_metadata, reorient, repeat_nd, reshape_axes, reshape_nd, scanimage_artist_metadata, scanimage_description_metadata, sequence, shaped_description, shaped_description_metadata, snipstr, squeeze_axes, stk_description_metadata, stripascii, stripnull, strptime, subresolution, svs_description_metadata, tiff2fsspec, tiffcomment, transpose_axes, unpack_rgb, validate_jhove, xml2dict, ) HERE = os.path.dirname(__file__) TEMP_DIR = os.path.join(HERE, '_tmp') PRIVATE_DIR = os.path.join(HERE, 'data', 'private') PUBLIC_DIR = os.path.join(HERE, 'data', 'public') IS_BE = sys.byteorder == 'big' IS_PYPY = 'pypy' in sys.version.lower() IS_WIN = sys.platform == 'win32' IS_CG = os.environ.get('COMPUTERNAME', '').startswith('CG-K') def skip(key: str, default: bool) -> bool: """Return if environment variable is set and true.""" return os.getenv(key, default) in {True, 1, '1'} # skip tests requiring large memory SKIP_LARGE = skip('SKIP_LARGE', sys.maxsize < 2**32) SKIP_EXTENDED = skip('SKIP_EXTENDED', False) # skip public files SKIP_PUBLIC = skip('SKIP_PUBLIC', not os.path.exists(PUBLIC_DIR)) # skip private files SKIP_PRIVATE = skip('SKIP_PRIVATE', not 
os.path.exists(PRIVATE_DIR)) # skip validate written files with jhove SKIP_VALIDATE = skip('SKIP_VALIDATE', True) SKIP_CODECS = skip('SKIP_CODECS', False) SKIP_ZARR = skip('SKIP_ZARR', False) SKIP_DASK = skip('SKIP_DASK', False) SKIP_NDTIFF = skip('SKIP_NDTIFF', False) SKIP_HTTP = skip('SKIP_HTTP', not IS_CG) REASON = 'skipped' FILE_FLAGS = ['is_' + a for a in TIFF.FILE_FLAGS] FILE_FLAGS += [name for name in dir(TiffFile) if name.startswith('is_')] PAGE_FLAGS = [name for name in dir(TiffPage) if name.startswith('is_')] URL = 'http://localhost:8386/' # TEMP_DIR if not SKIP_HTTP: try: urllib.request.urlopen(URL + '/test/test.txt', timeout=0.5) except (urllib.error.URLError, TimeoutError): SKIP_HTTP = True if not os.path.exists(TEMP_DIR): TEMP_DIR = tempfile.gettempdir() if not SKIP_CODECS: try: import imagecodecs SKIP_CODECS = False except ImportError: SKIP_CODECS = True zarr: ModuleType | None fsspec: ModuleType | None if IS_PYPY: SKIP_ZARR = True SKIP_DASK = True SKIP_HTTP = True if SKIP_ZARR: zarr = None else: try: import fsspec # type: ignore[no-redef] import zarr # type: ignore[no-redef] except ImportError: zarr = None fsspec = None SKIP_ZARR = True dask: ModuleType | None if SKIP_DASK: dask = None else: try: import dask import dask.array except ImportError: dask = None SKIP_DASK = True ndtiff: ModuleType | None if SKIP_NDTIFF: ndtiff = None else: try: import ndtiff # type: ignore[no-redef] except ImportError: ndtiff = None SKIP_NDTIFF = True def config() -> str: """Return test configuration.""" this = sys.modules[__name__] return ' | '.join( a for a in dir(this) if a.startswith('SKIP_') and getattr(this, a) ) def _data_file( pathname: str, base: str, expand: bool = True ) -> str | list[str]: """Return path to test file(s).""" path = os.path.join(base, *pathname.split('/')) if expand and any(i in path for i in '*?'): return glob.glob(path) return path def private_file(pathname: str, base: str = PRIVATE_DIR) -> str: """Return path to private test file(s).""" files = _data_file(pathname, base, expand=False) assert isinstance(files, str) return files def private_files(pathname: str, base: str = PRIVATE_DIR) -> list[str]: """Return path to private test file(s).""" files = _data_file(pathname, base, expand=True) assert isinstance(files, list) return files def public_file(pathname: str, base: str = PUBLIC_DIR) -> str | list[str]: """Return path to public test file(s).""" files = _data_file(pathname, base, expand=False) assert isinstance(files, str) return files def public_files(pathname: str, base: str = PUBLIC_DIR) -> list[str]: """Return path to public test file(s).""" files = _data_file(pathname, base, expand=True) assert isinstance(files, list) return files def random_data(dtype: DTypeLike, shape: tuple[int, ...]) -> NDArray[Any]: """Return random numpy array.""" # TODO: use nd noise dtype = numpy.dtype(dtype) if dtype.char == '?': return numpy.random.rand(*shape) < 0.5 data = numpy.random.rand(*shape) * 255 data = data.astype(dtype) return data def assert_file_flags(tiff_file: TiffFile) -> None: """Access all flags of TiffFile.""" for flag in FILE_FLAGS: getattr(tiff_file, flag) def assert_page_flags(tiff_page: TiffPage) -> None: """Access all flags of TiffPage.""" for flag in PAGE_FLAGS: getattr(tiff_page, flag) def assert__str__(tif: TiffFile, detail: int = 3) -> None: """Call TiffFile._str and __repr__ functions.""" for i in range(detail + 1): tif._str(detail=i) repr(tif) str(tif) repr(tif.pages) str(tif.pages) if len(tif.pages) > 0: page = tif.pages.first repr(page) str(page) 
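        # also exercise the tag mapping repr and lazily computed
        # page properties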
        str(page.tags)
        page.flags
        page.name
        page.dims
        page.sizes
        page.coords
    repr(tif.series)
    str(tif.series)
    if len(tif.series) > 0:
        series = tif.series[0]
        repr(series)
        str(series)


def assert__repr__(obj: object) -> None:
    """Call object's __repr__ and __str__ function."""
    repr(obj)
    str(obj)


def assert_valid_omexml(omexml: str) -> None:
    """Validate OME-XML schema."""
    if not SKIP_HTTP:
        OmeXml.validate(omexml, assert_=True)


def assert_valid_tiff(filename: str, *args: Any, **kwargs: Any) -> None:
    """Validate TIFF file using jhove script."""
    if SKIP_VALIDATE:
        return
    validate_jhove(filename, 'jhove.cmd', *args, **kwargs)


def assert_decode_method(
    page: TiffPage, image: ArrayLike | None = None
) -> None:
    """Call TiffPage.decode on all segments and compare to TiffPage.asarray."""
    fh = page.parent.filehandle
    if page.is_tiled:
        offsets = page.tags['TileOffsets'].value
        bytecounts = page.tags['TileByteCounts'].value
    else:
        offsets = page.tags['StripOffsets'].value
        bytecounts = page.tags['StripByteCounts'].value
    if image is None:
        image = page.asarray()
    else:
        image = numpy.asarray(image)
    for i, (o, b) in enumerate(zip(offsets, bytecounts)):
        fh.seek(o)
        strile, index, shape = page.decode(fh.read(b), i)
        assert strile is not None
        assert image.reshape(page.shaped)[index] == strile[0, 0, 0, 0]


def assert_aszarr_method(
    obj: TiffFile | TiffPage,
    image: ArrayLike | None = None,
    chunkmode: int | None = None,
    **kwargs: Any,
) -> None:
    """Assert aszarr returns same data as asarray."""
    if SKIP_ZARR or zarr is None:
        return
    if image is None:
        image = obj.asarray(**kwargs)
    with obj.aszarr(chunkmode=chunkmode, **kwargs) as store:
        data = zarr.open(store, mode='r')
        if isinstance(data, zarr.Group):
            data = data[0]
        assert_array_equal(data, image)
        del data


class TempFileName:
    """Temporary file name context manager."""

    name: str
    remove: bool

    def __init__(
        self, name: str | None = None, ext: str = '.tif', remove: bool = False
    ) -> None:
        self.remove = remove or TEMP_DIR == tempfile.gettempdir()
        if not name:
            fh = tempfile.NamedTemporaryFile(prefix='test_')
            self.name = fh.name
            fh.close()
        else:
            self.name = os.path.join(TEMP_DIR, f'test_{name}{ext}')

    def __enter__(self) -> str:
        return self.name

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        if self.remove:
            try:
                os.remove(self.name)
            except Exception:
                pass


numpy.set_printoptions(suppress=True, precision=5)

###############################################################################

# Tests for specific issues


def test_issue_star_import():
    """Test from tifffile import *."""
    assert len(STAR_IMPORTED) > 0
    assert lsm2bin not in STAR_IMPORTED


@pytest.mark.skipif(__doc__ is None, reason='__doc__ is None')
def test_issue_version_mismatch():
    """Test 'tifffile.__version__' matches docstrings."""
    ver = ':Version: ' + tifffile.__version__
    assert ver in __doc__
    assert ver in tifffile.__doc__


def test_issue_deprecated_import():
    """Test deprecated functions can no longer be imported."""
    with pytest.raises(ImportError):
        from tifffile import imsave


def test_issue_imread_kwargs():
    """Test that is_flags are handled by imread."""
    data = random_data(numpy.uint16, (5, 63, 95))
    with TempFileName('issue_imread_kwargs') as fname:
        with TiffWriter(fname) as tif:
            for image in data:
                tif.write(image)  # create 5 series
        assert_valid_tiff(fname)
        image = imread(fname, pattern=None)  # reads first series
        assert_array_equal(image, data[0])
        image = imread(fname, is_shaped=False)  # reads all pages
        assert_array_equal(image, data)


def test_issue_imread_kwargs_legacy():
    """Test legacy arguments no longer
work as of 2022.4.22. Specifying 'fastij', 'movie', 'multifile', 'multifile_close', or 'pages' raises TypeError. Specifying 'key' and 'pages' raises TypeError. Specifying 'pages' in TiffFile constructor raises TypeError. """ data = random_data(numpy.uint8, (3, 21, 31)) with TempFileName('issue_imread_kwargs_legacy') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.MINISBLACK) with pytest.raises(TypeError): imread(fname, fastij=True) with pytest.raises(TypeError): imread(fname, movie=True) with pytest.raises(TypeError): imread(fname, multifile=True) with pytest.raises(TypeError): imread(fname, multifile_close=True) with pytest.raises(TypeError): TiffFile(fname, fastij=True) with pytest.raises(TypeError): TiffFile(fname, multifile=True) with pytest.raises(TypeError): TiffFile(fname, multifile_close=True) with pytest.raises(TypeError): imread(fname, key=0, pages=[1, 2]) with pytest.raises(TypeError): TiffFile(fname, pages=[1, 2]) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_infinite_loop(): """Test infinite loop reading more than two tags of same code in IFD.""" # Reported by D. Hughes on 2019.7.26 # the test file is corrupted but should not cause infinite loop fname = private_file('gdk-pixbuf/bug784903-overflow-dimensions.tiff') with TiffFile(fname) as tif: page = tif.pages.first assert page.compression == 0 # invalid assert__str__(tif) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_issue_jpeg_ia(): """Test JPEG compressed intensity image with alpha channel.""" # no extrasamples! fname = private_file('issues/jpeg_ia.tiff') with TiffFile(fname) as tif: page = tif.pages.first assert page.compression == COMPRESSION.JPEG assert_array_equal( page.asarray(), numpy.array([[[0, 0], [255, 255]]], dtype=numpy.uint8), ) assert__str__(tif) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_issue_jpeg_palette(): """Test invalid JPEG compressed intensity image with palette.""" # https://forum.image.sc/t/viv-and-avivator/45999/24 fname = private_file('issues/FL_cells.ome.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.compression == COMPRESSION.JPEG assert page.colormap is not None data = tif.asarray() assert data.shape == (4, 1024, 1024) assert data.dtype == numpy.uint8 assert data[2, 512, 512] == 10 assert_aszarr_method(tif, data) assert__str__(tif) def test_issue_specific_pages(): """Test read second page.""" data = random_data(numpy.uint8, (3, 21, 31)) with TempFileName('issue_specific_pages') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.MINISBLACK) image = imread(fname) assert image.shape == (3, 21, 31) # UserWarning: can not reshape (21, 31) to (3, 21, 31) image = imread(fname, key=1) assert image.shape == (21, 31) assert_array_equal(image, data[1]) with TempFileName('issue_specific_pages_bigtiff') as fname: imwrite(fname, data, bigtiff=True, photometric=PHOTOMETRIC.MINISBLACK) image = imread(fname) assert image.shape == (3, 21, 31) # UserWarning: can not reshape (21, 31) to (3, 21, 31) image = imread(fname, key=1) assert image.shape == (21, 31) assert_array_equal(image, data[1]) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_issue_circular_ifd(caplog): """Test circular IFD logs error but is still readable.""" fname = public_file('Tiff-Library-4J/IFD struct/Circular E.tif') with TiffFile(fname) as tif: assert len(tif.pages) == 2 assert 'invalid circular reference' in caplog.text image = tif.asarray() assert 
image.shape == (2, 1500, 2000, 3)
        assert image[1, 1499, 1999, 2] == 110


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_bad_description(caplog):
    """Test page.description is empty when ImageDescription is not ASCII."""
    # ImageDescription is not ASCII but bytes
    fname = private_file('stk/cells in the eye2.stk')
    with TiffFile(fname) as tif:
        page = tif.pages.first
        assert page.description == ''
        assert__str__(tif)
    assert 'coercing invalid ASCII to bytes' in caplog.text


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_bad_ascii(caplog):
    """Test coerce invalid ASCII to bytes."""
    # ImageID is not ASCII but bytes
    # https://github.com/blink1073/tifffile/pull/38
    fname = private_file('issues/tifffile_013_tagfail.tif')
    with TiffFile(fname) as tif:
        tags = tif.pages.first.tags
        assert tags['ImageID'].value[-8:] == b'rev 2893'
        assert__str__(tif)
    assert 'coercing invalid ASCII to bytes' in caplog.text


def test_issue_sampleformat():
    """Test write correct number of SampleFormat values."""
    # https://github.com/ngageoint/geopackage-tiff-java/issues/5
    data = random_data(numpy.int16, (256, 256, 4))
    with TempFileName('issue_sampleformat') as fname:
        imwrite(fname, data, photometric=PHOTOMETRIC.RGB)
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            assert tags['SampleFormat'].value == (2, 2, 2, 2)
            assert tags['ExtraSamples'].value == (2,)
            assert__str__(tif)


def test_issue_sampleformat_default():
    """Test SampleFormat are not written for UINT."""
    data = random_data(numpy.uint8, (256, 256, 4))
    with TempFileName('issue_sampleformat_default') as fname:
        imwrite(fname, data, photometric=PHOTOMETRIC.RGB)
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            assert 'SampleFormat' not in tags
            assert tags['ExtraSamples'].value == (2,)
            assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_issue_palette_with_extrasamples():
    """Test read palette with extra samples."""
    # https://github.com/python-pillow/Pillow/issues/1597
    fname = private_file('issues/palette_with_extrasamples.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 518
        assert page.imagelength == 556
        assert page.bitspersample == 8
        assert page.samplesperpixel == 2
        # assert data
        image = page.asrgb()
        assert image.shape == (556, 518, 3)
        assert image.dtype == numpy.uint16
        image = tif.asarray()
        # self.assertEqual(image.shape[-3:], (556, 518, 2))
        assert image.shape == (556, 518, 2)
        assert image.dtype == numpy.uint8
        assert_aszarr_method(tif, image)
        del image
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_issue_incorrect_rowsperstrip_count():
    """Test read incorrect count for rowsperstrip; bitspersample = 4."""
    # https://github.com/python-pillow/Pillow/issues/1544
    fname = private_file('bad/incorrect_count.tiff')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.imagewidth == 32
        assert page.imagelength == 32
        assert page.bitspersample == 4
        assert page.samplesperpixel == 1
        assert page.rowsperstrip == 32
        assert page.dataoffsets[0] == 8
        assert page.databytecounts[0] == 89
        # assert data
        image = page.asrgb()
        assert image.shape == (32, 32, 3)
        assert_aszarr_method(page)
        del image
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_extra_strips(caplog):
    """Test
read extra strips.""" # https://github.com/opencv/opencv/issues/17054 fname = private_file('issues/extra_strips.tif') with TiffFile(fname) as tif: assert not tif.is_bigtiff assert len(tif.pages) == 1 page = tif.pages.first assert page.tags['StripOffsets'].value == (8, 0, 0) assert page.tags['StripByteCounts'].value == (55064448, 0, 0) assert page.dataoffsets[0] == 8 assert page.databytecounts[0] == 55064448 assert page.is_contiguous # assert data image = tif.asarray() assert image.shape == (2712, 3384, 3) assert_aszarr_method(page, image) assert 'incorrect StripOffsets count' in caplog.text assert 'incorrect StripByteCounts count' in caplog.text @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_no_bytecounts(caplog): """Test read no bytecounts.""" fname = private_file('bad/img2_corrupt.tif') with TiffFile(fname) as tif: assert not tif.is_bigtiff assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.dataoffsets[0] == 512 assert page.databytecounts[0] == 0 # assert data image = tif.asarray() assert image.shape == (800, 1200) # fails: assert_aszarr_method(tif, image) assert 'invalid value offset 0' in caplog.text assert 'invalid data type 31073' in caplog.text assert 'invalid page offset 808333686' in caplog.text @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_issue_missing_eoi_in_strips(): """Test read LZW strips without EOI.""" # 256x256 uint16, lzw, imagej # Strips do not contain an EOI code as required by the TIFF spec. # File generated by `tiffcp -c lzw Z*.tif stack.tif` from # Bars-G10-P15.zip # Failed with "series 0 failed: string size must be a multiple of # element size" # Reported by Kai Wohlfahrt on 3/7/2014 fname = private_file('issues/stack.tif') with TiffFile(fname) as tif: assert tif.is_imagej assert tif.byteorder == '<' assert len(tif.pages) == 128 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.imagewidth == 256 assert page.imagelength == 256 assert page.bitspersample == 16 # assert series properties series = tif.series[0] assert series.shape == (128, 256, 256) assert series.dtype == numpy.uint16 assert series.axes == 'IYX' # assert ImageJ tags ijmeta = tif.imagej_metadata assert ijmeta is not None assert ijmeta['ImageJ'] == '1.41e' # assert data data = tif.asarray() assert data.shape == (128, 256, 256) assert data.dtype == numpy.uint16 assert data[64, 128, 128] == 19226 assert_aszarr_method(tif, data) del data assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_imagej_grascalemode(): """Test read ImageJ grayscale mode RGB image.""" # https://github.com/cgohlke/tifffile/issues/6 fname = private_file('issues/hela-cells.tif') with TiffFile(fname) as tif: assert tif.is_imagej assert tif.byteorder == '>' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 672 assert page.imagelength == 512 assert page.bitspersample == 16 assert page.is_contiguous # assert series properties series = tif.series[0] assert series.shape == (512, 672, 3) assert series.dtype == numpy.uint16 assert series.axes == 'YXS' # assert ImageJ tags ijmeta = tif.imagej_metadata assert ijmeta is not None assert ijmeta['ImageJ'] == '1.52p' assert ijmeta['channels'] == 3 # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (512, 672, 3) assert data.dtype == numpy.uint16 
assert tuple(data[255, 336]) == (440, 378, 298) assert_aszarr_method(tif, data) assert__str__(tif) @pytest.mark.parametrize('byteorder', ['>', '<']) def test_issue_valueoffset(byteorder): """Test read TiffTag.valueoffsets.""" unpack = struct.unpack data = random_data(byteorder + 'u2', (2, 19, 31)) software = 'test_tifffile' bo = {'>': 'be', '<': 'le'}[byteorder] with TempFileName(f'issue_valueoffset_{bo}') as fname: imwrite( fname, data, software=software, photometric=PHOTOMETRIC.MINISBLACK, extratags=[(65535, 3, 2, (21, 22), True)], ) with TiffFile(fname, _useframes=True) as tif: with open(fname, 'rb') as fh: page = tif.pages.first # inline value fh.seek(page.tags['ImageLength'].valueoffset) assert ( page.imagelength == unpack(tif.byteorder + 'I', fh.read(4))[0] ) # two inline values fh.seek(page.tags[65535].valueoffset) assert unpack(tif.byteorder + 'H', fh.read(2))[0] == 21 # separate value fh.seek(page.tags['Software'].valueoffset) assert page.software == bytes2str(fh.read(13)) # TiffFrame page = tif.pages[1].aspage() fh.seek(page.tags['StripOffsets'].valueoffset) assert ( page.dataoffsets[0] == unpack(tif.byteorder + 'I', fh.read(4))[0] ) tag = page.tags['ImageLength'] assert tag.name == 'ImageLength' assert tag.dtype_name == 'LONG' assert tag.dataformat == '1I' assert tag.valuebytecount == 4 @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_issue_pages_number(): """Test number of pages.""" fname = public_file('tifffile/100000_pages.tif') with TiffFile(fname) as tif: assert len(tif.pages) == 100000 assert__str__(tif, 0) def test_issue_pages_iterator(): """Test iterate over pages in series.""" data = random_data(numpy.int8, (8, 219, 301)) with TempFileName('issue_page_iterator') as fname: imwrite(fname, data[0]) imwrite( fname, data, photometric=PHOTOMETRIC.MINISBLACK, append=True, metadata={'axes': 'ZYX'}, ) imwrite(fname, data[-1], append=True) with TiffFile(fname) as tif: assert len(tif.pages) == 10 assert len(tif.series) == 3 page = tif.pages[1] assert isinstance(page, TiffPage) assert page.is_contiguous assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 # test read series 1 series = tif.series[1] assert len(series._pages) == 1 assert len(series.pages) == 8 image = series.asarray() assert_array_equal(data, image) for i, page in enumerate(series.pages): assert page is not None im = page.asarray() assert_array_equal(image[i], im) assert__str__(tif) def test_issue_tile_partial(): """Test write single tiles larger than image data.""" # https://github.com/cgohlke/tifffile/issues/3 data = random_data(numpy.uint8, (3, 15, 15, 15)) with TempFileName('issue_tile_partial_2d') as fname: imwrite(fname, data[0, 0], tile=(16, 16)) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.is_tiled assert ( page.tags['TileOffsets'].value[0] + page.tags['TileByteCounts'].value[0] == tif.filehandle.size ) assert_array_equal(page.asarray(), data[0, 0]) assert_aszarr_method(page, data[0, 0]) assert__str__(tif) with TempFileName('issue_tile_partial_3d') as fname: imwrite(fname, data[0], tile=(16, 16, 16)) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.is_tiled assert page.is_volumetric assert ( page.tags['TileOffsets'].value[0] + page.tags['TileByteCounts'].value[0] == tif.filehandle.size ) assert_array_equal(page.asarray(), data[0]) assert_aszarr_method(page, data[0]) 
assert__str__(tif) with TempFileName('issue_tile_partial_3d_separate') as fname: imwrite( fname, data, tile=(16, 16, 16), planarconfig=PLANARCONFIG.SEPARATE, photometric=PHOTOMETRIC.RGB, ) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.is_tiled assert ( page.tags['TileOffsets'].value[0] + page.tags['TileByteCounts'].value[0] * 3 == tif.filehandle.size ) assert_array_equal(page.asarray(), data) assert_aszarr_method(page, data) assert__str__(tif) # test complete tile is contiguous data = random_data(numpy.uint8, (16, 16)) with TempFileName('issue_tile_partial_not') as fname: imwrite(fname, data, tile=(16, 16)) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.is_memmappable assert page.is_tiled assert ( page.tags['TileOffsets'].value[0] + page.tags['TileByteCounts'].value[0] == tif.filehandle.size ) assert_array_equal(page.asarray(), data) assert_aszarr_method(page, data) assert__str__(tif) @pytest.mark.parametrize('compression', [1, 8]) @pytest.mark.parametrize('samples', [1, 3]) def test_issue_tiles_pad(samples, compression): """Test tiles from iterator get padded.""" # https://github.com/cgohlke/tifffile/issues/38 if samples == 3: data = numpy.random.randint(0, 2**12, (31, 33, 3), numpy.uint16) photometric = 'rgb' else: data = numpy.random.randint(0, 2**12, (31, 33), numpy.uint16) photometric = None def tiles(data, tileshape, pad=False): for y in range(0, data.shape[0], tileshape[0]): for x in range(0, data.shape[1], tileshape[1]): tile = data[y : y + tileshape[0], x : x + tileshape[1]] if pad and tile.shape != tileshape: tile = numpy.pad( tile, ( (0, tileshape[0] - tile.shape[0]), (0, tileshape[1] - tile.shape[1]), ), ) yield tile with TempFileName( f'issue_issue_tiles_pad_{compression}{samples}' ) as fname: imwrite( fname, tiles(data, (16, 16)), dtype=data.dtype, shape=data.shape, tile=(16, 16), photometric=photometric, compression=compression, ) assert_array_equal(imread(fname), data) assert_valid_tiff(fname) def test_issue_fcontiguous(): """Test write F-contiguous arrays.""" # https://github.com/cgohlke/tifffile/issues/24 data = numpy.asarray(random_data(numpy.uint8, (31, 33)), order='F') with TempFileName('issue_fcontiguous') as fname: imwrite(fname, data, compression=COMPRESSION.ADOBE_DEFLATE) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert_array_equal(page.asarray(), data) assert__str__(tif) def test_issue_pathlib(): """Test support for pathlib.Path.""" data = random_data(numpy.uint16, (219, 301)) with TempFileName('issue_pathlib') as fname: fname = pathlib.Path(fname) assert isinstance(fname, os.PathLike) # imwrite imwrite(fname, data) # imread im = imread(fname) assert_array_equal(im, data) # memmap im = memmap(fname) try: assert_array_equal(im, data) finally: del im # TiffFile with TiffFile(fname) as tif: with TempFileName('issue_pathlib_out') as outfname: outfname = pathlib.Path(outfname) # out=file im = tif.asarray(out=outfname) try: assert isinstance(im, numpy.memmap) assert_array_equal(im, data) assert os.path.samefile(im.filename, str(outfname)) finally: del im # TiffSequence with TiffSequence(fname) as tifs: im = tifs.asarray() assert_array_equal(im[0], data) with TiffSequence([fname]) as tifs: im = tifs.asarray() assert_array_equal(im[0], data) # TiffSequence container if SKIP_PRIVATE or SKIP_CODECS: pytest.skip(REASON) fname = pathlib.Path(private_file('TiffSequence.zip')) with 
TiffSequence('*.tif', container=fname, pattern=None) as tifs: im = tifs.asarray() assert im[9, 256, 256] == 135 @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_issue_lzw_corrupt(): """Test decode corrupted LZW segment raises RuntimeError.""" # reported by S Richter on 2020.2.17 fname = private_file('issues/lzw_corrupt.tiff') with pytest.raises(RuntimeError): with TiffFile(fname) as tif: tif.asarray() def test_issue_iterable_compression(): """Test write iterable of pages with compression.""" # https://github.com/cgohlke/tifffile/issues/20 data = numpy.random.rand(10, 10, 10) * 127 data = data.astype(numpy.int8) with TempFileName('issue_iterable_compression') as fname: with TiffWriter(fname) as tif: tif.write(data, shape=(10, 10, 10), dtype=numpy.int8) tif.write( data, shape=(10, 10, 10), dtype=numpy.int8, compression=COMPRESSION.ADOBE_DEFLATE, ) with TiffFile(fname) as tif: assert_array_equal(tif.series[0].asarray(), data) assert_array_equal(tif.series[1].asarray(), data) # fail with wrong dtype with TempFileName('issue_iterable_compression_fail') as fname: with TiffWriter(fname) as tif: with pytest.raises(ValueError): tif.write(data, shape=(10, 10, 10), dtype=numpy.uint8) with TiffWriter(fname) as tif: with pytest.raises(ValueError): tif.write( data, shape=(10, 10, 10), dtype=numpy.uint8, compression=COMPRESSION.ADOBE_DEFLATE, ) def test_issue_write_separated(): """Test write SEPARATED colorspace.""" # https://github.com/cgohlke/tifffile/issues/37 contig = random_data(numpy.uint8, (63, 95, 4)) separate = random_data(numpy.uint8, (4, 63, 95)) extrasample = random_data(numpy.uint8, (63, 95, 5)) with TempFileName('issue_write_separated') as fname: with TiffWriter(fname) as tif: tif.write(contig, photometric=PHOTOMETRIC.SEPARATED) tif.write(separate, photometric=PHOTOMETRIC.SEPARATED) tif.write( extrasample, photometric=PHOTOMETRIC.SEPARATED, extrasamples=[1], ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 3 assert len(tif.series) == 3 page = tif.pages.first assert page.photometric == PHOTOMETRIC.SEPARATED assert_array_equal(page.asarray(), contig) page = tif.pages[1] assert page.photometric == PHOTOMETRIC.SEPARATED assert_array_equal(page.asarray(), separate) page = tif.pages[2] assert page.photometric == PHOTOMETRIC.SEPARATED assert page.extrasamples == (1,) assert_array_equal(page.asarray(), extrasample) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_mmap(): """Test read from mmap object with no readinto function..""" fname = public_file('OME/bioformats-artificial/4D-series.ome.tiff') with open(fname, 'rb') as fh: mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) assert_array_equal(imread(mm), imread(fname)) mm.close() @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_micromanager(caplog): """Test fallback to ImageJ metadata if OME series fails.""" # https://github.com/cgohlke/tifffile/issues/54 # https://forum.image.sc/t/47567/9 # OME-XML does not contain reference to master file # file has corrupted MicroManager DisplaySettings metadata fname = private_file( 'OME/' 'image_stack_tpzc_50tp_2p_5z_3c_512k_1_MMStack_2-Pos001_000.ome.tif' ) with TiffFile( fname, is_mmstack=False, is_ndtiff=False, is_mdgel=False ) as tif: assert len(tif.pages) == 750 with caplog.at_level(logging.DEBUG): assert len(tif.series) == 1 assert 'OME series is BinaryOnly' in caplog.text assert tif.is_micromanager assert tif.is_ome assert tif.is_imagej assert tif.micromanager_metadata is not None assert 'DisplaySettings' not 
in tif.micromanager_metadata assert 'failed to read display settings' not in caplog.text series = tif.series[0] assert series.shape == (50, 5, 3, 256, 256) @pytest.mark.skipif(IS_PYPY, reason=REASON) def test_issue_pickle(): """Test that TIFF constants are picklable.""" # https://github.com/cgohlke/tifffile/issues/64 from pickle import dumps, loads with pytest.raises(AttributeError): assert loads(dumps(TIFF)).CHUNKMODE.PLANE == TIFF.CHUNKMODE.PLANE assert loads(dumps(TIFF.CHUNKMODE)).PLANE == TIFF.CHUNKMODE.PLANE assert loads(dumps(TIFF.CHUNKMODE.PLANE)) == TIFF.CHUNKMODE.PLANE def test_issue_imagej_singlet_dimensions(): """Test that ImageJ files can be read preserving singlet dimensions.""" # https://github.com/cgohlke/tifffile/issues/19 # https://github.com/cgohlke/tifffile/issues/66 data = numpy.random.randint(0, 2**8, (1, 10, 1, 248, 260, 1), numpy.uint8) with TempFileName('issue_imagej_singlet_dimensions') as fname: imwrite(fname, data, imagej=True) image = imread(fname, squeeze=False) assert_array_equal(image, data) with TiffFile(fname) as tif: assert tif.is_imagej series = tif.series[0] assert series.axes == 'ZYX' assert series.shape == (10, 248, 260) assert series.get_axes(squeeze=False) == 'TZCYXS' assert series.get_shape(squeeze=False) == (1, 10, 1, 248, 260, 1) data = tif.asarray(squeeze=False) assert_array_equal(image, data) assert_aszarr_method(series, data, squeeze=False) assert_aszarr_method(series, data, squeeze=False, chunkmode='page') @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_issue_cr2_ojpeg(): """Test read OJPEG image from CR2.""" # https://github.com/cgohlke/tifffile/issues/75 fname = private_file('CanonCR2/Canon - EOS M6 - RAW (3 2).cr2') with TiffFile(fname) as tif: assert len(tif.pages) == 4 page = tif.pages.first assert page.compression == 6 assert page.shape == (4000, 6000, 3) assert page.dtype == numpy.uint8 assert page.photometric == PHOTOMETRIC.YCBCR assert page.compression == COMPRESSION.OJPEG data = page.asarray() assert data.shape == (4000, 6000, 3) assert data.dtype == numpy.uint8 assert tuple(data[1640, 2372]) == (71, 75, 58) assert_aszarr_method(page, data) page = tif.pages[1] assert page.shape == (120, 160, 3) assert page.dtype == numpy.uint8 assert page.photometric == PHOTOMETRIC.YCBCR assert page.compression == COMPRESSION.OJPEG data = page.asarray() assert tuple(data[60, 80]) == (124, 144, 107) assert_aszarr_method(page, data) page = tif.pages[2] assert page.shape == (400, 600, 3) assert page.dtype == numpy.uint16 assert page.photometric == PHOTOMETRIC.RGB assert page.compression == COMPRESSION.NONE data = page.asarray() assert tuple(data[200, 300]) == (1648, 2340, 1348) assert_aszarr_method(page, data) page = tif.pages[3] assert page.shape == (4056, 3144, 2) assert page.dtype == numpy.uint16 assert page.photometric == PHOTOMETRIC.MINISWHITE assert page.compression == COMPRESSION.OJPEG # SOF3 data = page.asarray() assert tuple(data[2000, 1500]) == (1759, 2467) assert_aszarr_method(page, data) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_issue_ojpeg_preview(): """Test read JPEGInterchangeFormat from RAW image.""" # https://github.com/cgohlke/tifffile/issues/93 fname = private_file('RAW/RAW_NIKON_D3X.NEF') with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == COMPRESSION.NONE assert page.shape == (120, 160, 3) assert page.dtype == numpy.uint8 assert page.photometric == 
PHOTOMETRIC.RGB
        data = page.asarray()
        assert data.shape == (120, 160, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[60, 80]) == (180, 167, 159)
        assert_aszarr_method(page, data)
        page = tif.pages.first.pages[0]
        assert page.shape == (4032, 6048, 3)
        assert page.dtype == numpy.uint8
        assert page.compression == COMPRESSION.OJPEG
        data = page.asarray()
        assert tuple(data[60, 80]) == (67, 13, 11)
        assert_aszarr_method(page, data)
        page = tif.pages.first.pages[1]
        assert page.shape == (4044, 6080)
        assert page.bitspersample == 14
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.compression == COMPRESSION.NIKON_NEF
        with pytest.raises(ValueError):
            data = page.asarray()


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_issue_arw(caplog):
    """Test read Sony ARW RAW image."""
    # https://github.com/cgohlke/tifffile/issues/95
    fname = private_file('RAW/A1_full_lossless_compressed.ARW')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 3
        assert len(tif.series) == 4
        page = tif.pages.first
        assert page.compression == COMPRESSION.OJPEG
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.shape == (1080, 1616, 3)
        assert page.dtype == numpy.uint8
        data = page.asarray()
        assert data.shape == (1080, 1616, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[60, 80]) == (122, 119, 104)
        assert_aszarr_method(page, data)
        assert tif.pages.first.pages is not None
        page = tif.pages.first.pages[0]
        assert page.is_tiled
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.bitspersample == 14
        assert page.tags['SonyRawFileType'].value == 4
        assert page.tags['CFARepeatPatternDim'].value == (2, 2)
        assert page.tags['CFAPattern'].value == b'\0\1\1\2'
        assert page.shape == (6144, 8704)
        assert page.dtype == numpy.uint16
        data = page.asarray()
        assert 'SonyRawFileType' in caplog.text
        assert data[60, 80] == 1000  # might not be correct according to #95
        assert_aszarr_method(page, data)
        page = tif.pages[1]
        assert page.compression == COMPRESSION.OJPEG
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.shape == (120, 160, 3)
        assert page.dtype == numpy.uint8
        data = page.asarray()
        assert tuple(data[60, 80]) == (56, 54, 29)
        assert_aszarr_method(page, data)
        page = tif.pages[2]
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.shape == (5760, 8640, 3)
        assert page.dtype == numpy.uint8
        data = page.asarray()
        assert tuple(data[60, 80]) == (243, 238, 218)
        assert_aszarr_method(page, data)


def test_issue_rational_rounding():
    """Test rationals are rounded to 64-bit."""
    # https://github.com/cgohlke/tifffile/issues/81
    data = numpy.array([[255]])
    with TempFileName('issue_rational_rounding') as fname:
        imwrite(
            fname, data, resolution=(7411.824413635355, 7411.824413635355)
        )
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            assert tags['XResolution'].value == (4294967295, 579475)
            assert tags['YResolution'].value == (4294967295, 579475)


def test_issue_omexml_micron():
    """Test OME-TIFF can be created with micron character in XML."""
    # https://forum.image.sc/t/micro-character-in-omexml-from-python/53578/4
    with TempFileName('issue_omexml_micron', ext='.ome.tif') as fname:
        imwrite(
            fname,
            [[0]],
            metadata={'PhysicalSizeX': 1.0, 'PhysicalSizeXUnit': 'µm'},
        )
        with TiffFile(fname) as tif:
            assert tif.is_ome
            assert (
                'PhysicalSizeXUnit="µm"'
                in tif.pages.first.tags['ImageDescription'].value
            )


def test_issue_svs_doubleheader():
    """Test svs_description_metadata for SVS with double header."""
    #
https://github.com/cgohlke/tifffile/pull/88 assert svs_description_metadata( 'Aperio Image Library v11.2.1\r\n' '2220x2967 -> 574x768 - ;Aperio Image Library v10.0.51\r\n' '46920x33014 [0,100 46000x32914] (256x256) JPEG/RGB Q=30' '|AppMag = 20|StripeWidth = 2040|ScanScope ID = CPAPERIOCS' '|Filename = CMU-1|Date = 12/29/09|Time = 09:59:15' '|User = b414003d-95c6-48b0-9369-8010ed517ba7|Parmset = USM Filter' '|MPP = 0.4990|Left = 25.691574|Top = 23.449873' '|LineCameraSkew = -0.000424|LineAreaXOffset = 0.019265' '|LineAreaYOffset = -0.000313|Focus Offset = 0.000000' '|ImageID = 1004486|OriginalWidth = 46920|Originalheight = 33014' '|Filtered = 5|OriginalWidth = 46000|OriginalHeight = 32914' ) == { 'Header': ( 'Aperio Image Library v11.2.1\r\n' '2220x2967 -> 574x768 - ;Aperio Image Library v10.0.51\r\n' '46920x33014 [0,100 46000x32914] (256x256) JPEG/RGB Q=30' ), 'AppMag': 20, 'StripeWidth': 2040, 'ScanScope ID': 'CPAPERIOCS', 'Filename': 'CMU-1', 'Date': '12/29/09', 'Time': '09:59:15', 'User': 'b414003d-95c6-48b0-9369-8010ed517ba7', 'Parmset': 'USM Filter', 'MPP': 0.499, 'Left': 25.691574, 'Top': 23.449873, 'LineCameraSkew': -0.000424, 'LineAreaXOffset': 0.019265, 'LineAreaYOffset': -0.000313, 'Focus Offset': 0.0, 'ImageID': 1004486, 'OriginalWidth': 46000, 'Originalheight': 33014, 'Filtered': 5, 'OriginalHeight': 32914, } @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_issue_packbits_dtype(): """Test read and efficiently write PackBits compressed int16 image.""" # https://github.com/blink1073/tifffile/issues/61 # requires imagecodecs > 2021.6.8 fname = private_file('packbits/imstack_packbits-int16.tif') with TiffFile(fname) as tif: assert len(tif.pages) == 519 page = tif.pages[181] assert page.compression == COMPRESSION.PACKBITS assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.shape == (348, 185) assert page.dtype == numpy.int16 data = page.asarray() assert data.shape == (348, 185) assert data.dtype == numpy.int16 assert data[184, 72] == 24 assert_aszarr_method(page, data) data = tif.asarray() assert_aszarr_method(tif, data) buf = BytesIO() imwrite(buf, data, compression='packbits') assert buf.seek(0, 2) < 1700000 # efficiently compressed buf.seek(0) with TiffFile(buf) as tif: assert len(tif.pages) == 519 page = tif.pages[181] assert page.compression == COMPRESSION.PACKBITS assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.shape == (348, 185) assert page.dtype == numpy.int16 data = page.asarray() assert data.shape == (348, 185) assert data.dtype == numpy.int16 assert data[184, 72] == 24 assert_aszarr_method(page, data) data = tif.asarray() assert_aszarr_method(tif, data) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_issue_predictor_byteorder(): """Test read big-endian uint32 RGB with horizontal predictor.""" fname = private_file('issues/flower-rgb-contig-32_msb_zip_predictor.tiff') with TiffFile(fname) as tif: assert tif.tiff.byteorder == '>' assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.photometric == PHOTOMETRIC.RGB assert page.predictor == PREDICTOR.HORIZONTAL assert page.shape == (43, 73, 3) assert page.dtype == numpy.uint32 data = page.asarray() assert data.shape == (43, 73, 3) assert data.dtype == numpy.uint32 assert tuple(data[30, 2]) == (0, 246337650, 191165795) assert data.dtype.byteorder == '=' assert_aszarr_method(page, data) data = tif.asarray() assert_aszarr_method(tif, data) @pytest.mark.skipif(SKIP_ZARR or SKIP_DASK, reason=REASON) 
@pytest.mark.parametrize('truncate', [False, True])
@pytest.mark.parametrize('chunkmode', [0, 2])
def test_issue_dask_multipage(truncate, chunkmode):
    """Test multi-threaded access of memory-mappable, multi-page Zarr stores."""
    # https://github.com/cgohlke/tifffile/issues/67#issuecomment-908529425
    data = numpy.arange(5 * 99 * 101, dtype=numpy.uint16).reshape((5, 99, 101))
    with TempFileName(
        f'issue_dask_multipage_{int(truncate)}_{chunkmode}'
    ) as fname:
        kwargs = {'truncate': truncate}
        if not truncate:
            kwargs['tile'] = (32, 32)
        imwrite(fname, data, **kwargs)
        with imread(fname, aszarr=True, chunkmode=chunkmode) as store:
            daskarray = dask.array.from_zarr(store).compute()
            assert_array_equal(data, daskarray)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_ZARR, reason=REASON)
@pytest.mark.parametrize('chunkmode', [0, 2])
def test_issue_zarr_store_closed(chunkmode):
    """Test Zarr store filehandle is open when reading from unloaded pages."""
    # https://github.com/cgohlke/tifffile/issues/67#issuecomment-2246367891
    fname = private_file('ImageJ/_malaria_parasites.tif')
    data = imread(fname)
    store = imread(fname, aszarr=True, chunkmode=chunkmode)
    try:
        z = zarr.open(store, mode='r')
        chunk = z[10:11, 3:-3]  # seek of closed file
    finally:
        store.close()
    assert_array_equal(chunk, data[10:11, 3:-3])
    store = imread(fname, aszarr=True, chunkmode=chunkmode)
    try:
        z = zarr.open(store, mode='r')
        chunk = z[:]
    finally:
        store.close()
    assert_array_equal(chunk, data)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_ZARR, reason=REASON)
@pytest.mark.parametrize('chunkmode', [0, 2])
def test_issue_zarr_store_multifile_closed(chunkmode):
    """Test Zarr store can read from closed files."""
    fname = private_file('OME/tubhiswt-4D-lzw/tubhiswt_C0_T0.ome.tif')
    data = imread(fname)
    assert data.shape == (43, 10, 2, 512, 512)
    store = imread(fname, aszarr=True, chunkmode=chunkmode)
    try:
        z = zarr.open(store, mode='r')
        chunk = z[32:35, 3:7, :, 31:-31, 33:-33]  # seek of closed file
    finally:
        store.close()
    assert_array_equal(chunk, data[32:35, 3:7, :, 31:-31, 33:-33])


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.LZW.available, reason=REASON
)
def test_issue_read_from_closed_file():
    """Test read from closed file handles."""
    fname = private_file('OME/tubhiswt-4D-lzw/tubhiswt_C0_T0.ome.tif')
    with tifffile.TiffFile(fname) as tif:
        count = 0
        for frame in tif.series[0].pages[:10]:
            # most file handles are closed
            if frame is None:
                continue
            isclosed = frame.parent.filehandle.closed
            if not isclosed:
                continue
            count += 1
            if isinstance(frame, TiffFrame):
                with pytest.warns(UserWarning):
                    page = frame.aspage()  # re-load frame as page
                assert isclosed == page.parent.filehandle.closed
            else:
                page = frame
            with pytest.warns(UserWarning):
                page.colormap  # delay load tag value
            assert isclosed == page.parent.filehandle.closed
            with pytest.warns(UserWarning):
                frame.asarray()  # read data
            assert isclosed == page.parent.filehandle.closed
        assert count > 0


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.PNG.available, reason=REASON
)
def test_issue_filesequence_categories(caplog):
    """Test FileSequence with categories."""
    # https://github.com/cgohlke/tifffile/issues/76
    with tifffile.FileSequence(
        imagecodecs.imread,
        private_files('dataset-A1-20200531/*.png'),
        pattern=(
            r'(?P<sampleid>.{2})-'
            r'(?P<experiment>.+)-\d{8}T\d{6}-PSII0-'
            r'(?P<frameid>\d)'
        ),
        categories={'sampleid': {'A1': 0, 'B1': 1}, 'experiment': {'doi': 0}},
    ) as pngs:
        with pytest.warns(DeprecationWarning):
            assert len(pngs.files) == 2
        assert len(pngs) == 2
        assert pngs.files_missing == 2
        assert pngs.shape == (2, 1,
2) assert pngs.dims == ('sampleid', 'experiment', 'frameid') data = pngs.asarray() assert data.shape == (2, 1, 2, 200, 200) assert data[1, 0, 1, 100, 100] == 353 @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_issue_filesequence_file_parameter(): """Test FileSequence.asarray with 'file' parameter removed in 2022.4.22.""" # https://github.com/bluesky/tiled/pull/97 files = public_files('tifffile/temp_C001T00*.tif') with TiffSequence(files) as tiffs: assert tiffs.shape == (2,) with pytest.raises(TypeError): assert_array_equal(tiffs.asarray(file=files[0]), imread(files[0])) with pytest.raises(TypeError): assert_array_equal(tiffs.asarray(file=1), imread(files[1])) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_imagej_prop(): """Test read and write ImageJ prop metadata type.""" # https://github.com/cgohlke/tifffile/issues/103 # also test write indexed ImageJ file fname = private_file('issues/triple-sphere-big-distance=035.tif') with tifffile.TiffFile(fname) as tif: assert tif.is_imagej ijmeta = tif.imagej_metadata assert ijmeta is not None prop = ijmeta['Properties'] assert ijmeta['slices'] == 500 assert not ijmeta['loop'] assert prop['CurrentLUT'] == 'glasbey_on_dark' assert tif.pages.first.photometric == PHOTOMETRIC.PALETTE colormap = tif.pages.first.colormap data = tif.asarray() prop['Test'] = 0.1 with TempFileName('issue_imagej_prop') as fname: ijmeta['axes'] = 'ZYX' imwrite(fname, data, imagej=True, colormap=colormap, metadata=ijmeta) with tifffile.TiffFile(fname) as tif: assert tif.is_imagej ijmeta = tif.imagej_metadata assert ijmeta is not None prop = ijmeta['Properties'] assert ijmeta['slices'] == 500 assert not ijmeta['loop'] assert prop['CurrentLUT'] == 'glasbey_on_dark' assert prop['Test'] == '0.1' assert tif.pages.first.photometric == PHOTOMETRIC.PALETTE colormap = tif.pages.first.colormap image = tif.asarray() assert_array_equal(image, data) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_missing_dataoffset(caplog): """Test read file with missing data offset.""" fname = private_file('gdal/bigtiff_header_extract.tif') with tifffile.TiffFile(fname) as tif: page = tif.pages.first assert page.imagewidth == 100000 assert page.imagelength == 100000 assert page.rowsperstrip == 1 assert page.databytecounts == (10000000000,) assert page.dataoffsets == () assert 'incorrect StripOffsets count' in caplog.text assert 'incorrect StripByteCounts count' in caplog.text assert 'missing data offset tag' in caplog.text with pytest.raises(TiffFileError): tif.asarray() @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_issue_imagej_metadatabytecounts(): """Test read ImageJ file with many IJMetadataByteCounts.""" # https://github.com/cgohlke/tifffile/issues/111 fname = private_file('imagej/issue111.tif') with tifffile.TiffFile(fname) as tif: assert tif.is_imagej page = tif.pages.first assert isinstance(page.tags['IJMetadataByteCounts'].value, tuple) assert isinstance(page.tags['IJMetadata'].value, dict) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_issue_description_bytes(): """Test read file with imagedescription bytes.""" # https://github.com/cgohlke/tifffile/issues/112 with TempFileName('issue_description_bytes') as fname: imwrite( fname, [[0]], description='1st description', extratags=[ ('ImageDescription', 1, None, b'\1\128\0', True), ('ImageDescription', 1, None, b'\2\128\0', True), ], metadata=False, ) with TiffFile(fname) as tif: page = tif.pages.first assert page.description == '1st description' assert page.description1 == '' assert 


def test_issue_imagej_colormap():
    """Test write 32-bit ImageJ file with colormap."""
    # https://github.com/cgohlke/tifffile/issues/115
    colormap = numpy.vstack(
        [
            numpy.zeros(256, dtype=numpy.uint16),
            numpy.arange(0, 2**16, 2**8, dtype=numpy.uint16),
            numpy.arange(0, 2**16, 2**8, dtype=numpy.uint16),
        ]
    )
    metadata = {'min': 0.0, 'max': 1.0, 'Properties': {'CurrentLUT': 'cyan'}}
    with TempFileName('issue_imagej_colormap') as fname:
        imwrite(
            fname,
            numpy.zeros((16, 16), numpy.float32),
            imagej=True,
            colormap=colormap,
            metadata=metadata,
        )
        with TiffFile(fname) as tif:
            assert tif.is_imagej
            assert tif.imagej_metadata is not None
            assert tif.imagej_metadata['Properties']['CurrentLUT'] == 'cyan'
            assert tif.pages.first.photometric == PHOTOMETRIC.MINISBLACK
            assert_array_equal(tif.pages.first.colormap, colormap)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.WEBP.available,
    reason=REASON,
)
@pytest.mark.parametrize('name', ['tile', 'strip'])
def test_issue_webp_rgba(name, caplog):
    """Test read WebP segments with missing alpha channel."""
    # https://github.com/cgohlke/tifffile/issues/122
    fname = private_file(f'issues/CMU-1-Small-Region.{name}.webp.tiff')
    with tifffile.TiffFile(fname) as tif:
        page = tif.pages.first
        assert page.compression == COMPRESSION.WEBP
        assert page.shape == (2967, 2220, 4)
        assert tuple(page.asarray()[25, 25]) == (246, 244, 245, 255)
        assert f'corrupted {name}' not in caplog.text


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_ZARR or SKIP_CODECS or not imagecodecs.WEBP.available,
    reason=REASON,
)
def test_issue_webp_fsspec():
    """Test read WebP segments with missing alpha channel via fsspec."""
    try:
        from imagecodecs.numcodecs import register_codecs
    except ImportError:
        register_codecs = None
    else:
        register_codecs('imagecodecs_webp', verbose=False)
    fname = private_file('issues/CMU-1-Small-Region.tile.webp.tiff')
    url = os.path.dirname(fname).replace('\\', '/')
    data = imread(fname, series=0)
    with TempFileName('issue_webp_fsspec', ext='.json') as jsonfile:
        tiff2fsspec(
            fname,
            url,
            out=jsonfile,
            series=0,
            level=0,
            version=0,
        )
        mapper = fsspec.get_mapper(
            'reference://',
            fo=jsonfile,
            target_protocol='file',
            remote_protocol='file',
        )
        zobj = zarr.open(mapper, mode='r')
        assert_array_equal(zobj[:], data)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_ZARR, reason=REASON)
def test_issue_tiffslide():
    """Test no ValueError when closing TiffSlide with Zarr group."""
    # https://github.com/bayer-science-for-a-better-life/tiffslide/issues/25
    try:
        from tiffslide import TiffSlide
    except ImportError:
        pytest.skip('tiffslide missing')
    fname = private_file('AperioSVS/CMU-1.svs')
    with TiffSlide(fname) as slide:
        _ = slide.ts_zarr_grp
        arr = slide.read_region((100, 200), 0, (256, 256), as_array=True)
        assert arr.shape == (256, 256, 3)


@pytest.mark.skipif(SKIP_ZARR, reason=REASON)
def test_issue_xarray():
    """Test read Zarr store with fsspec and xarray."""
    try:
        import xarray
    except ImportError:
        pytest.skip('xarray missing')
    data = numpy.random.randint(0, 2**8, (5, 31, 33, 3), numpy.uint8)
    with TempFileName('issue_xarry.ome') as fname:
        with tifffile.TiffWriter(fname) as tif:
            tif.write(
                data,
                photometric='rgb',
                tile=(16, 16),
                metadata={'axes': 'TYXC'},
            )
        for squeeze in (True, False):
            with TempFileName(
                f'issue_xarry_{squeeze}', ext='.json'
            ) as jsonfile:
                with tifffile.TiffFile(fname) as tif:
                    store = tif.series[0].aszarr(squeeze=squeeze)
                    store.write_fsspec(
                        jsonfile,
                        url=os.path.split(jsonfile)[0],
                        groupname='x',
                    )
                    store.close()
                mapper = fsspec.get_mapper(
                    'reference://',
                    fo=jsonfile,
                    target_protocol='file',
                    remote_protocol='file',
                )
                dataset = xarray.open_dataset(
                    mapper,
                    engine='zarr',
                    mask_and_scale=False,
                    backend_kwargs={'consolidated': False},
                )
                if squeeze:
                    assert dataset['x'].shape == (5, 31, 33, 3)
                    assert dataset['x'].dims == ('T', 'Y', 'X', 'S')
                else:
                    assert dataset['x'].shape == (5, 1, 1, 31, 33, 3)
                    assert dataset['x'].dims == ('T', 'Z', 'C', 'Y', 'X', 'S')
                assert_array_equal(data, numpy.squeeze(dataset['x'][:]))
                del dataset
                del mapper
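

# The fsspec round trip used above, reduced to its core steps (file names
# here are illustrative):
#
#     with tifffile.TiffFile('example.ome.tif') as tif:
#         store = tif.series[0].aszarr()
#         store.write_fsspec('example.json', url=os.path.dirname(jsonpath))
#         store.close()
#     mapper = fsspec.get_mapper(
#         'reference://', fo='example.json', target_protocol='file'
#     )
#     z = zarr.open(mapper, mode='r')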


@pytest.mark.skipif(SKIP_ZARR, reason=REASON)
def test_issue_xarray_multiscale():
    """Test read multiscale Zarr store with fsspec and xarray."""
    try:
        import xarray
    except ImportError:
        pytest.skip('xarray missing')
    data = numpy.random.randint(0, 2**8, (8, 3, 128, 128), numpy.uint8)
    with TempFileName('issue_xarry_multiscale.ome') as fname:
        with tifffile.TiffWriter(fname) as tif:
            tif.write(
                data,
                photometric='rgb',
                planarconfig='separate',
                tile=(32, 32),
                subifds=2,
                metadata={'axes': 'TCYX'},
            )
            tif.write(
                data[:, :, ::2, ::2],
                photometric='rgb',
                planarconfig='separate',
                tile=(32, 32),
            )
            tif.write(
                data[:, :, ::4, ::4],
                photometric='rgb',
                planarconfig='separate',
                tile=(16, 16),
            )
        for squeeze in (True, False):
            with TempFileName(
                f'issue_xarry_multiscale_{squeeze}', ext='.json'
            ) as jsonfile:
                with tifffile.TiffFile(fname) as tif:
                    store = tif.series[0].aszarr(squeeze=squeeze)
                    store.write_fsspec(
                        jsonfile,
                        url=os.path.split(jsonfile)[0],
                        # groupname='test',
                    )
                    store.close()
                mapper = fsspec.get_mapper(
                    'reference://',
                    fo=jsonfile,
                    target_protocol='file',
                    remote_protocol='file',
                )
                dataset = xarray.open_dataset(
                    mapper,
                    engine='zarr',
                    mask_and_scale=False,
                    backend_kwargs={'consolidated': False},
                )
                if squeeze:
                    assert dataset['0'].shape == (8, 3, 128, 128)
                    assert dataset['0'].dims == ('T', 'S', 'Y', 'X')
                    assert dataset['2'].shape == (8, 3, 32, 32)
                    assert dataset['2'].dims == ('T', 'S', 'Y2', 'X2')
                else:
                    assert dataset['0'].shape == (8, 1, 1, 3, 128, 128)
                    assert dataset['0'].dims == ('T', 'Z', 'C', 'S', 'Y', 'X')
                    assert dataset['2'].shape == (8, 1, 1, 3, 32, 32)
                    assert dataset['2'].dims == (
                        'T',
                        'Z',
                        'C',
                        'S',
                        'Y2',
                        'X2',
                    )
                assert_array_equal(data, numpy.squeeze(dataset['0'][:]))
                assert_array_equal(
                    data[:, :, ::4, ::4], numpy.squeeze(dataset['2'][:])
                )
                del dataset
                del mapper


@pytest.mark.parametrize('resolution', [(1, 0), (0, 0)])
def test_issue_invalid_resolution(resolution):
    """Test invalid resolution tags."""
    # https://github.com/imageio/imageio/blob/master/tests/test_tifffile.py
    data = numpy.zeros((20, 10), dtype=numpy.uint8)
    with TempFileName(f'issue_invalid_resolution{resolution[0]}') as fname:
        imwrite(fname, data)
        with TiffFile(fname, mode='r+') as tif:
            tags = tif.pages.first.tags
            tags['XResolution'].overwrite(resolution)
            tags['YResolution'].overwrite(resolution)
        with tifffile.TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            assert tags['XResolution'].value == resolution
            assert tags['YResolution'].value == resolution
            assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_issue_indexing():
    """Test indexing methods."""
    fname = public_file('tifffile/multiscene_pyramidal.ome.tif')
    data0 = imread(fname)
    assert isinstance(data0, numpy.ndarray)
    assert data0.shape == (16, 32, 2, 256, 256)
    level1 = imread(fname, level=1)
    assert isinstance(level1, numpy.ndarray)
    assert level1.shape == (16, 32, 2, 128, 128)
    data1 = imread(fname, series=1)
    assert isinstance(data1, numpy.ndarray)
    assert data1.shape == (128, 128, 3)
    assert_array_equal(data1, imread(fname, key=1024))
    assert_array_equal(data1, imread(fname, key=[1024]))
    assert_array_equal(data1, imread(fname, key=range(1024, 1025)))
    assert_array_equal(data1, imread(fname, series=1, key=0))
    assert_array_equal(data1, imread(fname, series=1, key=[0]))
    assert_array_equal(
        data1, imread(fname, series=1, level=0, key=slice(None))
    )
    assert_array_equal(data0, imread(fname, series=0))
    assert_array_equal(
        data0.reshape(-1, 256, 256), imread(fname, series=0, key=slice(None))
    )
    assert_array_equal(
        data0.reshape(-1, 256, 256), imread(fname, key=slice(0, -1, 1))
    )
    assert_array_equal(
        data0.reshape(-1, 256, 256), imread(fname, key=range(1024))
    )
    assert_array_equal(data0[0, 0], imread(fname, key=[0, 1]))
    assert_array_equal(data0[0, 0], imread(fname, series=0, key=(0, 1)))
    assert_array_equal(
        level1.reshape(-1, 128, 128),
        imread(fname, series=0, level=1, key=slice(None)),
    )
    assert_array_equal(
        level1.reshape(-1, 128, 128),
        imread(fname, series=0, level=1, key=range(1024)),
    )
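

# imread selection semantics exercised above: 'series' and 'level' pick a
# series and pyramid level first, then 'key' (int, sequence, slice, or
# range) selects pages from that series' flattened page sequence:
#
#     imread(fname, key=1024)             # page 1024 of the file
#     imread(fname, series=1, key=0)      # first page of the second series
#     imread(fname, series=0, level=1, key=slice(None))  # all level-1 pages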


def test_issue_shaped_metadata():
    """Test shaped_metadata property."""
    # https://github.com/cgohlke/tifffile/issues/127
    shapes = ([5, 33, 31], [31, 33, 3])
    with TempFileName('issue_shaped_metadata') as fname:
        with TiffWriter(fname) as tif:
            for shape in shapes:
                tif.write(
                    shape=shape,
                    dtype=numpy.uint8,
                    metadata={'comment': 'a comment', 'number': 42},
                )
        with TiffFile(fname) as tif:
            assert tif.is_shaped
            assert len(tif.series) == 2
            assert tif.series[0].kind == 'shaped'
            assert tif.series[1].kind == 'shaped'
            meta = tif.shaped_metadata
            assert meta is not None
            assert len(meta) == 2
            assert meta[0]['shape'] == shapes[0]
            assert meta[0]['comment'] == 'a comment'
            assert meta[0]['number'] == 42
            assert meta[1]['shape'] == shapes[1]
            assert meta[1]['comment'] == 'a comment'
            assert meta[1]['number'] == 42


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_uic_dates(caplog):
    """Test read MetaMorph STK metadata with invalid julian dates."""
    # https://github.com/cgohlke/tifffile/issues/129
    fname = private_file('issues/Cells-003_Cycle00001_Ch1_000001.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_stk
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_memmappable
        assert page.shape == (256, 256)
        assert page.tags['Software'].value == 'Prairie View 5.4.64.40'
        assert page.tags['DateTime'].value == '2019:03:18 10:13:33'
        # assert uic tags
        with pytest.warns(RuntimeWarning):
            meta = tif.stk_metadata
        assert 'no datetime before year 1' in caplog.text
        assert meta is not None
        assert meta['CreateTime'] is None
        assert meta['LastSavedTime'] is None
        assert meta['DatetimeCreated'] is None
        assert meta['DatetimeModified'] is None
        assert meta['Name'] == 'Gattaca'
        assert meta['NumberPlanes'] == 1
        assert meta['Wavelengths'][0] == 1.7906976744186047


def test_issue_subfiletype_zero():
    """Test write NewSubfileType=0."""
    # https://github.com/cgohlke/tifffile/issues/132
    with TempFileName('issue_subfiletype_zero') as fname:
        imwrite(fname, [[0]], subfiletype=0)
        with TiffFile(fname) as tif:
            assert (
                tif.pages.first.tags['NewSubfileType'].value
                == FILETYPE.UNDEFINED
            )


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_imagej_zct_order(caplog):
    """Test read ImageJ hyperstack with non-TZC order."""
    # https://forum.image.sc/t/69430
    fname = private_file(
        'MMStack/mosaic/d220708_HybISS_AS_cycles1to5_NoBridgeProbes_'
        'dim3x3__3_MMStack_2-Pos_000_000.ome.tif'
    )
    data = imread(fname, series=5, is_mmstack=False)

    fname = private_file(
        'MMStack/mosaic/d220708_HybISS_AS_cycles1to5_NoBridgeProbes_'
        'dim3x3__3_MMStack_2-Pos_000_001.ome.tif'
    )
    with TiffFile(fname, is_mmstack=False) as tif:
        assert not tif.is_mmstack
        assert tif.is_ome
        assert tif.is_imagej
        assert tif.imagej_metadata is not None
        assert tif.imagej_metadata['order'] == 'zct'
        with caplog.at_level(logging.DEBUG):
            series = tif.series[0]
            assert 'OME series is BinaryOnly' in caplog.text
        assert series.axes == 'CZYX'
        assert series.kind == 'imagej'
        assert_array_equal(series.asarray(), data)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_fei_sfeg_metadata():
    """Test read FEI_SFEG metadata."""
    # https://github.com/cgohlke/tifffile/pull/141
    # FEI_SFEG tag value is a base64 encoded XML string with BOM header
    fname = private_file('issues/Helios-AutoSliceAndView.tif')
    with TiffFile(fname) as tif:
        fei = tif.fei_metadata
        assert fei is not None
        assert fei['User']['User'] == 'Supervisor'
        assert fei['System']['DisplayHeight'] == 0.324


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_resolution():
    """Test consistency of reading and writing resolution."""
    resolution = (4294967295 / 3904515723, 4294967295 / 1952257861)  # 1.1, 2.2
    resolutionunit = RESUNIT.CENTIMETER
    scale = 111.111
    with TempFileName('issue_resolution') as fname:
        imwrite(
            fname, [[0]], resolution=resolution, resolutionunit=resolutionunit
        )
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert tif.pages.first.tags['XResolution'].value == (
                4294967295,
                3904515723,
            )
            assert tif.pages.first.tags['YResolution'].value == (
                4294967295,
                1952257861,
            )
            assert tif.pages.first.tags['ResolutionUnit'].value == (
                resolutionunit
            )
            assert page.resolution == resolution
            assert page.resolutionunit == resolutionunit
            assert page.get_resolution() == resolution
            assert page.get_resolution(resolutionunit) == resolution
            assert_array_almost_equal(
                page.get_resolution(RESUNIT.MICROMETER),
                (resolution[0] / 10000, resolution[1] / 10000),
            )
            assert_array_almost_equal(
                page.get_resolution(RESUNIT.MICROMETER, 100),
                (resolution[0] / 10000, resolution[1] / 10000),
            )
            assert_array_almost_equal(
                page.get_resolution('inch'),
                (resolution[0] * 2.54, resolution[1] * 2.54),
            )
            assert_array_almost_equal(
                page.get_resolution(scale=111.111),
                (resolution[0] * scale, resolution[1] * scale),
            )


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_resolutionunit():
    """Test write resolutionunit defaults."""
    # https://github.com/cgohlke/tifffile/issues/145
    with TempFileName('issue_resolutionunit_none') as fname:
        imwrite(fname, [[0]], resolution=None, resolutionunit=None)
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert tif.pages.first.tags['ResolutionUnit'].value == RESUNIT.NONE
            assert page.resolutionunit == RESUNIT.NONE
            assert page.resolution == (1, 1)

    with TempFileName('issue_resolutionunit_inch') as fname:
        imwrite(fname, [[0]], resolution=(1, 1), resolutionunit=None)
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert tif.pages.first.tags['ResolutionUnit'].value == RESUNIT.INCH
            assert page.resolutionunit == RESUNIT.INCH
            assert page.resolution == (1, 1)

    with TempFileName('issue_resolutionunit_imagej') as fname:
        imwrite(
            fname, [[0]], dtype=numpy.float32, imagej=True, resolution=(1, 1)
        )
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert tif.pages.first.tags['ResolutionUnit'].value == RESUNIT.NONE
            assert page.resolutionunit == RESUNIT.NONE
            assert page.resolution == (1, 1)
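

# TIFF stores XResolution and YResolution as unsigned RATIONAL tags, so
# float resolutions round-trip as integer fractions; the test above builds
# the floats from exact fractions so the written tags compare verbatim:
#
#     XResolution = (4294967295, 3904515723)  # ~1.1 pixels per unit
#     YResolution = (4294967295, 1952257861)  # ~2.2 pixels per unit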


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_issue_ome_jpeg_colorspace():
    """Test colorspace of JPEG segments encoded by BioFormats."""
    # https://forum.image.sc/t/69862
    # JPEG encoded segments are stored as YCBCR but the
    # PhotometricInterpretation tag is RGB
    # CMU-1.svs exported by QuPath 0.3.2
    fname = private_file('ome/CMU-1.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        series = tif.series[0].levels[5]
        assert series.kind == 'ome'
        assert series.keyframe.is_jfif
        assert series.shape == (1028, 1437, 3)
        assert tuple(series.asarray()[800, 200]) == (207, 166, 198)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_imagej_compressed():
    """Test read ImageJ hyperstack with compression."""
    # regression in tifffile 2022.7.28
    fname = private_file('imagej/imagej_compressed.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert len(tif.pages) == 120
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.axes == 'ZCYX'
        assert series.shape == (60, 2, 256, 256)
        assert series.sizes == {
            'depth': 60,
            'channel': 2,
            'height': 256,
            'width': 256,
        }
        assert series.keyframe.compression == COMPRESSION.ADOBE_DEFLATE
        assert series.asarray()[59, 1, 55, 87] == 5643


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_issue_jpeg_rgb():
    """Test write JPEG compression in RGB mode."""
    # https://github.com/cgohlke/tifffile/issues/146
    # requires imagecodecs > 2022.7.31
    data = imread(public_file('tifffile/rgb.tif'))
    assert data.shape == (32, 31, 3)
    with TempFileName('issue_jpeg_rgb') as fname:
        imwrite(
            fname,
            data,
            photometric='rgb',
            subsampling=(1, 1),
            compression='jpeg',
            compressionargs={'level': 95, 'outcolorspace': 'rgb'},
        )
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert page.shape == data.shape
            assert page.photometric == PHOTOMETRIC.RGB
            assert page.compression == COMPRESSION.JPEG
            assert not page.is_jfif
            image = page.asarray()
        assert_array_equal(image, imagecodecs.imread(fname))


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_issue_imread_out():
    """Test imread supports out argument."""
    # https://github.com/cgohlke/tifffile/issues/147
    fname = public_file('tifffile/rgb.tif')
    image = imread(fname, out=None)
    assert isinstance(image, numpy.ndarray)
    data = imread(fname, out='memmap')
    assert isinstance(data, numpy.memmap)
    assert_array_equal(data, image)
    image = imread([fname, fname], out=None)
    assert isinstance(image, numpy.ndarray)
    assert_array_equal(image[1], data)
    data = imread(
        [fname, fname],
        chunkshape=(32, 31, 3),
        chunkdtype=numpy.uint8,
        out_inplace=True,
        out='memmap',
    )
    assert isinstance(data, numpy.memmap)
    assert_array_equal(data, image)


def test_issue_imagej_hyperstack_arg():
    """Test write ImageJ with hyperstack argument."""
    # https://stackoverflow.com/questions/73279086
    with TempFileName('issue_imagej_hyperstack_arg') as fname:
        data = numpy.zeros((4, 3, 10, 11), dtype=numpy.uint8)
        imwrite(
            fname,
            data,
            imagej=True,
            metadata={'hyperstack': True, 'axes': 'TZYX'},
        )
        with TiffFile(fname) as tif:
            assert tif.is_imagej
            assert 'hyperstack=true' in tif.pages.first.description
            assert tif.imagej_metadata is not None
            assert tif.imagej_metadata['hyperstack']
            assert tif.series[0].axes == 'TZYX'


def test_issue_description_overwrite():
    """Test user description is not overwritten if metadata is disabled."""
    data = numpy.zeros((5, 10, 11), dtype=numpy.uint8)
    omexml = OmeXml()
    omexml.addimage(
        dtype=data.dtype,
        shape=data.shape,
        storedshape=(5, 1, 1, 10, 11, 1),
        axes='ZYX',
    )
    description = omexml.tostring()
    with TempFileName('issue_description_overwrite') as fname:
        with tifffile.TiffWriter(fname, ome=False) as tif:
            for frame in data:
                tif.write(
                    frame,
                    description=description,
                    metadata=None,
                    contiguous=True,
                )
                description = None
        with TiffFile(fname) as tif:
            assert tif.is_ome
            assert tif.pages.first.description == omexml.tostring()
            assert tif.series[0].kind == 'ome'
            assert tif.series[0].axes == 'ZYX'
            assert_array_equal(tif.asarray(), data)


def test_issue_svs_description():
    """Test svs_description_metadata function."""
    # https://github.com/cgohlke/tifffile/issues/149
    assert svs_description_metadata(
        'Aperio Image Library vFS90 01\r\n'
        '159712x44759 [0,100 153271x44659] (256x256) JPEG/RGB Q=30'
        '|AppMag = 40'
        '|StripeWidth = 992'
        '|ScanScope ID = SS1475'
        '|Filename = 12-0893-1'
        '|Title = Mag = 40X, compression quality =30'
        '|Date = 11/20/12'
        '|Time = 01:06:12'
        '|Time Zone = GMT-05:00'
        '|User = 8ce982e3-6ea2-4715-8af3-9874e823e6d9'
        '|MPP = 0.2472'
        '|Left = 19.730396'
        '|Top = 15.537785'
        '|LineCameraSkew = 0.001417'
        '|LineAreaXOffset = 0.014212'
        '|LineAreaYOffset = -0.004733'
        '|Focus Offset = 0.000000'
        '|DSR ID = 152.19.62.167'
        '|ImageID = 311112'
        '|Exposure Time = 109'
        '|Exposure Scale = 0.000001'
        '|DisplayColor = 0'
        '|OriginalWidth = 159712'
        '|OriginalHeight = 44759'
        '|ICC Profile = ScanScope v1'
    ) == {
        'Header': (
            'Aperio Image Library vFS90 01\r\n'
            '159712x44759 [0,100 153271x44659] (256x256) JPEG/RGB Q=30'
        ),
        'AppMag': 40,
        'StripeWidth': 992,
        'ScanScope ID': 'SS1475',
        'Filename': '12-0893-1',
        'Title': 'Mag = 40X, compression quality =30',
        'Date': '11/20/12',
        'Time': '01:06:12',
        'Time Zone': 'GMT-05:00',
        'User': '8ce982e3-6ea2-4715-8af3-9874e823e6d9',
        'MPP': 0.2472,
        'Left': 19.730396,
        'Top': 15.537785,
        'LineCameraSkew': 0.001417,
        'LineAreaXOffset': 0.014212,
        'LineAreaYOffset': -0.004733,
        'Focus Offset': 0.0,
        'DSR ID': '152.19.62.167',
        'ImageID': 311112,
        'Exposure Time': 109,
        'Exposure Scale': 0.000001,
        'DisplayColor': 0,
        'OriginalWidth': 159712,
        'OriginalHeight': 44759,
        'ICC Profile': 'ScanScope v1',
    }

    assert svs_description_metadata(
        'Aperio Image Library v11.0.37\r\n60169x38406 (256x256) JPEG/RGB Q=70'
        '|Patient=CS-10-SI_HE'
        '|Accession='
        '|User='
        '|Date=10/12/2012'
        '|Time=04:55:13 PM'
        '|Copyright=Hamamatsu Photonics KK'
        '|AppMag=20'
        '|Webslide Files=5329'
    ) == {
        'Header': (
            'Aperio Image Library v11.0.37\r\n'
            '60169x38406 (256x256) JPEG/RGB Q=70'
        ),
        'Patient': 'CS-10-SI_HE',
        'Accession': '',
        'User': '',
        'Date': '10/12/2012',
        'Time': '04:55:13 PM',
        'Copyright': 'Hamamatsu Photonics KK',
        'AppMag': 20,
        'Webslide Files': 5329,
    }


def test_issue_iterator_recursion():
    """Test no RecursionError writing large number of tiled pages."""
    with TempFileName('issue_iterator_recursion') as fname:
        imwrite(fname, shape=(1024, 54, 64), dtype=numpy.uint8, tile=(32, 32))


@pytest.mark.skipif(SKIP_CODECS, reason=REASON)
def test_issue_predictor_floatx2():
    """Test floatx2 predictor."""
    # https://github.com/cgohlke/tifffile/issues/167
    data = random_data(numpy.float32, (219, 302))
    with TempFileName('issue_predictor_floatx2') as fname:
        imwrite(fname, data, predictor=34894, compression=True)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 1
            assert len(tif.series) == 1
            page = tif.pages.first
            assert page.photometric == PHOTOMETRIC.MINISBLACK
            assert page.imagewidth == 302
            assert page.imagelength == 219
            assert page.samplesperpixel == 1
            assert page.predictor == 34894
            assert page.compression == 8
            image = page.asarray()
            assert_array_equal(data, image)
            assert__str__(tif)


@pytest.mark.skipif(SKIP_CODECS, reason=REASON)
def test_issue_predictor_deltax2():
    """Test deltax2 predictor."""
    # https://github.com/cgohlke/tifffile/issues/167
    data = random_data(numpy.uint8, (219, 302))
    with TempFileName('issue_predictor_deltax2') as fname:
        with pytest.raises(NotImplementedError):
            imwrite(fname, data, predictor=34892, compression=8)
        # with TiffFile(fname) as tif:
        #     assert len(tif.pages) == 1
        #     assert len(tif.series) == 1
        #     page = tif.pages.first
        #     assert page.photometric == MINISBLACK
        #     assert page.imagewidth == 302
        #     assert page.imagelength == 219
        #     assert page.samplesperpixel == 1
        #     assert page.predictor == 34892
        #     assert page.compression == 8
        #     image = page.asarray()
        #     assert_array_equal(data, image)
        #     assert__str__(tif)
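

# Predictor tag values exercised by the two tests above: 2 is standard
# horizontal differencing, 3 is the floating-point predictor, and
# 34892/34894 appear to be the two-sample ("x2") delta/float variants from
# the DNG specification; writing 34892 with uint8 data currently raises
# NotImplementedError, as asserted above.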


@pytest.mark.skipif(SKIP_CODECS, reason=REASON)
@pytest.mark.parametrize('compression', ['none', 'packbits', 'zlib', 'lzw'])
@pytest.mark.parametrize('predictor', ['none', 'horizontal'])
@pytest.mark.parametrize('samples', [0, 1, 3])
def test_issue_tile_generator(compression, predictor, samples):
    """Test predictor and compression axes with tile generator."""
    # https://github.com/cgohlke/tifffile/issues/185
    if compression == 'none' and predictor != 'none':
        pytest.xfail('cannot use predictor without compression')
    data = numpy.empty(
        (27, 23, samples) if samples else (27, 23, 1), numpy.uint8
    )
    data[:] = 199
    data[7:9, 11:13] = 13
    data[22:25, 19:22] = 11

    def tiles():
        yield data[:16, :16]
        yield data[:16, 16:]
        yield data[16:, :16]
        yield data[16:, 16:]

    with TempFileName(
        f'issue_tile_generator_{compression}_{predictor}_{samples}'
    ) as fname:
        imwrite(
            fname,
            data=tiles(),
            shape=(27, 23, samples) if samples > 1 else (27, 23),
            dtype=numpy.uint8,
            tile=(16, 16),
            compression=compression,
            predictor=predictor,
        )
        assert_array_equal(imread(fname), data.squeeze())
        if (
            SKIP_CODECS
            or not imagecodecs.TIFF.available
            or (compression == 'packbits' and predictor == 'horizontal')
        ):
            return
        assert_array_equal(imagecodecs.imread(fname), data.squeeze())


def test_issue_extratags_filter(caplog):
    """Test filtering extratags."""
    # https://github.com/cgohlke/tifffile/pull/188
    with TempFileName('issue_extratags_filter') as fname:
        imwrite(
            fname,
            shape=(10, 10),
            dtype='u1',
            extratags=[
                (322, 3, 1, 2, False),  # TileWidth is filtered
                (34665, 13, 1, 0, False),  # ExifIFD is filtered
                (270, 2, 0, 'second description', True),  # filtered
                ('FillOrder', 3, 1, 1, True),  # by name should go through
            ],
        )
        assert 'extratag 322' in caplog.text
        assert 'extratag 34665' in caplog.text
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            assert 322 not in tags
            assert 34665 not in tags
            assert tags.get(270, index=1) is None
            assert tags[266].value == 1


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_issue_invalid_predictor(caplog):
    """Test decoding JPEG compression with invalid predictor tag."""
    fname = private_file('issues/invalid_predictor.tiff')
    with TiffFile(fname) as tif:
        page = tif.pages.first
        assert page.predictor == 58240
        assert page.compression == 7
        data = page.asarray()
        assert 'ignoring predictor' in caplog.text
        assert data.shape == (1275, 1650, 4)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_issue_ome_missing_frames():
    """Test read OME TIFF with missing pages at end."""
    # https://github.com/cgohlke/tifffile/issues/199
    fname = private_file('issues/stack_t24_y2048_x2448.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 16
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 2448
        assert page.imagelength == 2048
        assert page.bitspersample == 8
        assert not page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 24
        assert len(series.pages) == 24
        assert series[16] is None
        assert series[23] is None
        assert series.shape == (24, 2048, 2448)
        assert series.dtype == numpy.uint8
        assert series.axes == 'TYX'
        assert series.kind == 'ome'
        data = series.asarray()
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(not IS_WIN, reason=REASON)
def test_issue_maxworkers():
    """Test maxworkers defaults."""
    if 'TIFFFILE_NUM_THREADS' in os.environ:
        assert TIFF.MAXWORKERS == int(os.environ['TIFFFILE_NUM_THREADS'])
    else:
        assert TIFF.MAXWORKERS == max(1, os.cpu_count() // 2)
    if 'TIFFFILE_NUM_IOTHREADS' in os.environ:
        assert TIFF.MAXIOWORKERS == int(os.environ['TIFFFILE_NUM_IOTHREADS'])
    else:
        assert TIFF.MAXIOWORKERS == os.cpu_count() + 4


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_logging_filter(caplog):
    """Test raise an error by filtering logging messages."""
    # https://github.com/cgohlke/tifffile/issues/216
    import logging

    def log_filter(record):
        if record.levelno == logging.ERROR:
            assert record.funcName == '__init__'
            assert 'invalid value offset' in record.msg
            raise ValueError(record.msg)
        return True

    fname = private_file('bad/Gel1.tif')
    logger().addFilter(log_filter)
    try:
        with pytest.raises(ValueError):
            imread(fname)
    finally:
        logger().removeFilter(log_filter)
    assert 'invalid value offset' not in caplog.text
    imread(private_file(fname))
    assert 'invalid value offset' in caplog.text


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_wrong_shape(caplog):
    """Test rewritten file with wrong shape in metadata."""
    # https://github.com/Kitware/UPennContrast/issues/491
    fname = private_file('issues/2023_05_23_pw020_ctrl_well1.tif')
    with TiffFile(fname) as tif:
        assert tif.is_shaped
        page = tif.pages.first
        assert '"shape": [1, 1, 4, 1000, 1000]' in page.description
        assert '"axes": "ZTCYX"' in page.description
        assert page.shape == (1000, 1000, 4)
        series = tif.series[0]
        assert 'shaped series metadata does not match page' in caplog.text
        assert series.shape == (1000, 1000, 4)  # != (1, 1, 4, 1000, 1000)
        assert series.axes == 'YXS'  # != 'ZTCYX'
        assert_array_equal(page.asarray(), series.asarray())


def test_issue_exclusive_creation():
    """Test TiffWriter with exclusive creation mode 'x'."""
    # https://github.com/cgohlke/tifffile/pull/223
    with TempFileName('issue_exclusive_creation') as fname:
        if os.path.exists(fname):
            os.remove(fname)
        with FileHandle(fname, mode='x') as fh:
            assert fh._mode == 'xb'
            with TiffWriter(fh) as tif:
                tif.write(shape=(32, 32), dtype=numpy.uint8)
        with pytest.raises(FileExistsError):
            with FileHandle(fname, mode='x'):
                pass


def test_issue_non_volumetric():
    """Test writing non-volume data with volumetric tile."""
    # https://github.com/cgohlke/tifffile/pull/225
    data = random_data(numpy.uint8, (3, 13, 17))
    with TempFileName('issue_non_volumetric') as fname:
        with pytest.raises(ValueError):
            imwrite(fname, data[0], tile=(3, 16, 16))
        with pytest.raises(ValueError):
            imwrite(fname, data, photometric='rgb', tile=(3, 16, 16))
        # test some other invalid tiles
        with pytest.raises(ValueError):
            imwrite(fname, data, tile=(3, 3, 16, 16), photometric='minisblack')
        with pytest.raises(ValueError):
            imwrite(fname, data, tile=(16,), photometric='minisblack')
        with pytest.raises(ValueError):
            imwrite(fname, data, photometric='rgb', tile=(15, 16))
        with pytest.raises(ValueError):
            imwrite(fname, data, photometric='rgb', tile=(0, 16))
        # test some valid cases
        imwrite(fname, data, tile=(3, 16, 16), photometric='minisblack')
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert page.is_tiled
            assert page.is_volumetric
            assert page.shape == data.shape
        imwrite(fname, data, photometric='rgb', tile=(1, 16, 16))
        with TiffFile(fname) as tif:
            page = tif.pages.first
            assert page.is_tiled
            assert not page.is_volumetric
            assert page.shape == data.shape
        data = random_data(numpy.uint8, (3, 5, 13, 17))
        imwrite(
            fname,
            data,
            tile=(5, 16, 16),
            photometric='rgb',
            planarconfig='separate',
            metadata=None,
        )
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 1
            page = tif.pages.first
            assert page.is_tiled
            assert page.is_volumetric
            assert page.photometric == PHOTOMETRIC.RGB
            assert page.planarconfig == PLANARCONFIG.SEPARATE
            assert page.shape == data.shape
        data = random_data(numpy.uint8, (5, 13, 17, 3))
        imwrite(
            fname,
            data,
            tile=(5, 16, 16),
            photometric='rgb',
            planarconfig='contig',
            metadata=None,
        )
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 1
            page = tif.pages.first
            assert page.is_tiled
            assert page.is_volumetric
            assert page.photometric == PHOTOMETRIC.RGB
            assert page.planarconfig == PLANARCONFIG.CONTIG
            assert page.shape == data.shape
        data = random_data(numpy.uint8, (5, 13, 17))
        imwrite(
            fname,
            data,
            tile=(16, 16),  # -> (1, 16, 16)
            photometric='miniswhite',
            volumetric=True,
            metadata=None,
        )
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 1
            page = tif.pages.first
            assert page.is_tiled
            assert page.is_volumetric
            assert page.photometric == PHOTOMETRIC.MINISWHITE
            assert page.shape == data.shape
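

# Volumetric (3D) tiles are requested with a 3-tuple tile shape; a depth of
# one yields regular 2D tiles. The valid combinations from the assertions
# above, with illustrative array names:
#
#     imwrite(fname, zstack, tile=(3, 16, 16), photometric='minisblack')
#     imwrite(fname, rgbstack, photometric='rgb', tile=(1, 16, 16))
#     imwrite(fname, zstack, tile=(16, 16), volumetric=True)  # -> (1, 16, 16)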


@pytest.mark.skipif(SKIP_ZARR, reason=REASON)
def test_issue_truncated_tileoffsets():
    """Test reading truncated tile offsets and bytecounts."""
    # https://github.com/cgohlke/tifffile/issues/227
    data = random_data(numpy.uint8, (131, 128))
    with TempFileName('issue_truncated_tileoffsets') as fname:
        imwrite(fname, data, tile=(64, 64))
        with TiffFile(fname, mode='r+') as tif:
            # truncate TileOffsets and TileByteCounts
            tag = tif.pages[0].tags[324]
            assert tag.count == 6
            tag.overwrite(tag.value[:4])
            tag = tif.pages[0].tags[325]
            tag.overwrite(tag.value[:4])
        image = imread(fname)
        assert_raises(AssertionError, assert_array_equal, image, data)
        assert_array_equal(image[:128], data[:128])
        store = imread(fname, aszarr=True)
        za = zarr.open(store, mode='r')[:]
        store.close()
        assert_array_equal(za, image)


@pytest.mark.parametrize('buffersize', [0, 1, 1234])
@pytest.mark.parametrize('tile', [0, 1])
def test_issue_buffersize(buffersize, tile):
    """Test reading and writing with custom buffersize."""
    data = random_data(numpy.uint8, (5, 131, 133))
    with TempFileName(f'issue_buffersize_{buffersize}_{tile}') as fname:
        kwargs = {'tile': (32, 32)} if tile else {'rowsperstrip': 8}
        imwrite(
            fname,
            data,
            maxworkers=2,
            buffersize=buffersize,
            compression='zlib',
            **kwargs,
        )
        im = imread(fname, maxworkers=2, buffersize=buffersize)
        assert_array_equal(im, data)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_ideas(caplog):
    """Test reading invalid TIFF produced by IDEAS."""
    # https://forum.image.sc/t/96103
    # NewSubfileType tag is of type bytes
    # FillOrder tag is zero
    fname = private_file('IDEAS/IDEAS_file.ome.tif')
    with TiffFile(fname) as tif:
        assert '0 is not a valid FILLORDER' in caplog.text
        assert 'invalid self.subfiletype=b' in caplog.text
        assert tif.is_ome
        page = tif.pages.first
        assert page.tags['NewSubfileType'].value == b'\x02\x00'
        assert page.tags['FillOrder'].value == 0
        series = tif.series[0]
        assert series.shape == (37, 29)
        assert series.axes == 'YX'
        assert_array_equal(page.asarray(), series.asarray())


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_nodata_invalid(caplog):
    """Test read GeoTIFF with invalid nodata."""
    fname = private_file('GeoTIFF/nodata_corrupted.tiff')
    with TiffFile(fname) as tif:
        assert 'GDAL_NODATA tag raised ValueError' in caplog.text
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert tif.is_geotiff
        assert tif.is_gdal
        # assert page properties
        page = tif.pages.first
        assert page.dtype == numpy.uint8
        assert page.nodata == 0
        assert page.tags['GDAL_NODATA'].value == '-99999999'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (920, 1300)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data[500, 600] == 85
        assert data[0, 0] == 101
        assert_aszarr_method(tif, data)
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_issue_tag_readfunc(caplog):
    """Test allow tag reader functions to fail."""
    # file has an OlympusSIS tag, which points to an invalid structure
    fname = private_file('SIS/cirb_NOT_SIS.tif')
    with TiffFile(fname) as tif:
        # assert 'invalid OlympusSIS structure' in caplog.text
        assert tif.pages.first.tags[33560].value == (1382303888,)
        assert 'invalid OlympusSIS structure' in caplog.text
        assert tif.series[0].kind == 'generic'
        data = tif.asarray()
        assert data.shape == (800, 800, 4)
        assert data[166, 290, 2] == 255


@pytest.mark.skipif(
    SKIP_CODECS
    or not imagecodecs.JPEG8.available
    or not imagecodecs.JPEG2K.available
    or not imagecodecs.JPEGXL.available
    or imagecodecs.JPEG8.legacy,
    reason=REASON,
)
def test_issue_jpeg_bitspersample():
    """Test write JPEG with various bitspersample values."""
    # https://github.com/cgohlke/tifffile/pull/265
    data6 = random_data(numpy.uint8, (131, 128, 3)) >> 2
    data12 = random_data(numpy.uint16, (131, 128, 3)) >> 4
    jpeg12 = imagecodecs.jpeg_encode(data12, bitspersample=12, lossless=True)
    with TempFileName('issue_jpeg_bitspersample') as fname:
        with TiffWriter(fname) as tif:
            tif.write(
                data6,
                photometric='rgb',
                compression='jpeg',
                compressionargs={'lossless': True},
                bitspersample=6,
            )
            tif.write(
                data12,
                photometric='rgb',
                compression='jpeg',
                compressionargs={'lossless': True, 'bitspersample': 12},
                # bitspersample=12,
            )
            tif.write(
                iter((jpeg12,)),
                shape=data12.shape,
                dtype=data12.dtype,
                photometric='rgb',
                compression='jpeg',
                bitspersample=14,
            )
            tif.write(
                data12,
                photometric='rgb',
                compression='jpeg2000',
                compressionargs={'bitspersample': 12},
            )
            tif.write(
                data12,
                photometric='rgb',
                compression='jpegxl',
                compressionargs={'bitspersample': 12},
            )
            with pytest.raises(ValueError):
                tif.write(
                    data12,
                    photometric='rgb',
                    compression='jpeg',
                    bitspersample=17,
                )
            with pytest.raises(ValueError):
                tif.write(
                    data12,
                    photometric='rgb',
                    compression='jpeg2000',
                    compressionargs={'bitspersample': 0},
                )
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 5
            assert tif.pages[0].bitspersample == 6
            assert tif.pages[1].bitspersample == 12
            assert tif.pages[2].bitspersample == 14
            assert tif.pages[3].bitspersample == 12
            assert tif.pages[4].bitspersample == 12
            assert_array_equal(tif.series[0].asarray(), data6)
            assert_array_equal(tif.series[1].asarray(), data12)
            assert_array_equal(tif.series[2].asarray(), data12)
            assert_array_equal(tif.series[3].asarray(), data12)
            assert_array_equal(tif.series[4].asarray(), data12)
            assert__str__(tif)


def test_issue_ometiff_modulo():
    """Test writing OME-TIFF with modulo dimension."""
    # required for PhasorPy
    data = random_data(numpy.uint8, (3, 3, 31, 33))
    with TempFileName('issue_ometiff_modulo') as fname:
        with TiffWriter(fname, ome=True) as tif:
            tif.write(
                data, photometric='minisblack', metadata={'axes': 'HTYX'}
            )
            tif.write(
                data, photometric='minisblack', metadata={'axes': 'QZYX'}
            )
        with TiffFile(fname) as tif:
            series = tif.series[0]
            assert series.axes == 'HTYX'
            assert series.shape == (3, 3, 31, 33)
            series = tif.series[1]
            assert series.axes == 'QZYX'
            assert series.shape == (3, 3, 31, 33)
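

# The 'H' and 'Q' axes above are not part of the canonical TZCYXS set;
# tifffile evidently round-trips them through OME modulo annotations, which
# is all the assertions above verify (axes and shape are preserved).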


def test_issue_ometiff_pixel():
    """Test writing OME-TIFF with three one-pixel images."""
    # required for PhasorPy
    data = random_data(numpy.uint8, (3, 1, 1))
    with TempFileName('issue_ometiff_pixel') as fname:
        with TiffWriter(fname, ome=True) as tif:
            tif.write(data, photometric='minisblack', metadata={'axes': 'TYX'})
        with TiffFile(fname) as tif:
            series = tif.series[0]
            assert series.axes == 'TYX'
            assert series.shape == (3, 1, 1)


@pytest.mark.parametrize('arg', [None, 'none', 'None', 'NONE', 1])
def test_issue_none_str(arg):
    """Test predictor and compression none arguments."""
    # https://github.com/cgohlke/tifffile/pull/274
    with TempFileName(f'issue_none_str_{arg}') as fname:
        imwrite(
            fname,
            shape=(27, 23),
            dtype=numpy.uint8,
            compression=arg,
            predictor=arg,
        )
        assert imread(fname).shape == (27, 23)


class TestExceptions:
    """Test various Exceptions and Warnings."""

    data = random_data(numpy.uint16, (5, 13, 17))

    @pytest.fixture(scope='class')
    def fname(self):
        with TempFileName('exceptions') as fname:
            yield fname

    def test_nofiles(self):
        # no files found
        with pytest.raises(ValueError):
            imread('*.exceptions')

    def test_memmap(self, fname):
        # not memory-mappable
        imwrite(fname, self.data, compression=8)
        with pytest.raises(ValueError):
            memmap(fname)
        with pytest.raises(ValueError):
            memmap(fname, page=0)

    def test_dimensions(self, fname):
        # dimensions too large
        with pytest.raises(ValueError):
            imwrite(fname, shape=(4294967296, 32), dtype=numpy.uint8)

    def test_no_shape_dtype_empty(self, fname):
        # shape and dtype missing for empty array
        with pytest.raises(ValueError):
            imwrite(fname)

    def test_no_shape_dtype_iter(self, fname):
        # shape and dtype missing for iterator
        with pytest.raises(ValueError):
            imwrite(fname, iter(self.data))

    def test_no_shape_dtype(self, fname):
        # shape and dtype missing
        with TiffWriter(fname) as tif:
            with pytest.raises(ValueError):
                tif.write()

    def test_no_shape(self, fname):
        # shape missing
        with TiffWriter(fname) as tif:
            with pytest.raises(ValueError):
                tif.write(iter(self.data), dtype='u2')

    def test_no_dtype(self, fname):
        # dtype missing
        with TiffWriter(fname) as tif:
            with pytest.raises(ValueError):
                tif.write(iter(self.data), shape=(5, 13, 17))

    def test_mismatch_dtype(self, fname):
        # dtype wrong
        with pytest.raises(ValueError):
            imwrite(fname, self.data, dtype='f4')

    def test_mismatch_shape(self, fname):
        # shape wrong
        with pytest.raises(ValueError):
            imwrite(fname, self.data, shape=(2, 13, 17))

    def test_byteorder(self, fname):
        # invalid byteorder
        with pytest.raises(ValueError):
            imwrite(fname, self.data, byteorder='?')

    def test_truncate_compression(self, fname):
        # truncate cannot be used with compression, packints, or tiles
        with pytest.raises(ValueError):
            imwrite(fname, self.data, compression=8, truncate=True)

    def test_truncate_ome(self, fname):
        # truncate cannot be used with ome-tiff
        with pytest.raises(ValueError):
            imwrite(fname, self.data, ome=True, truncate=True)

    def test_truncate_noshape(self, fname):
        # truncate cannot be used with shaped=False
        with pytest.raises(ValueError):
            imwrite(fname, self.data, shaped=False, truncate=True)

    def test_compression(self, fname):
        # invalid compression
        with pytest.raises(TypeError):
            imwrite(fname, self.data, compression=(8, None, None, None))

    def test_predictor_dtype(self, fname):
        # cannot apply predictor to dtype complex
        with pytest.raises(ValueError):
            imwrite(
                fname, self.data.astype('F'), predictor=True, compression=8
            )

    def test_predictor_float(self, fname):
        # cannot apply horizontal predictor to float
        with pytest.raises(ValueError):
            imwrite(fname, self.data.astype('f'), predictor=2, compression=8)

    def test_predictor_uint(self, fname):
        # cannot apply float predictor to uint
        with pytest.raises(ValueError):
            imwrite(fname, self.data.astype('H'), predictor=3, compression=8)

    def test_predictor_wrong_compression(self, fname):
        # cannot apply predictor with image compression schemes
        with pytest.raises(ValueError):
            imwrite(fname, self.data.astype('B'), predictor=2, compression=7)

    def test_predictor_no_compression(self, fname):
        # cannot apply predictor without compression
        with pytest.raises(ValueError):
            imwrite(fname, self.data.astype('B'), predictor=2, compression=1)

    def test_ome_imagedepth(self, fname):
        # OME-TIFF does not support ImageDepth
        with pytest.raises(ValueError):
            imwrite(fname, self.data, ome=True, volumetric=True)

    def test_imagej_dtype(self, fname):
        # ImageJ does not support dtype
        with pytest.raises(ValueError):
            imwrite(fname, self.data.astype('f8'), imagej=True)

    def test_imagej_imagedepth(self, fname):
        # ImageJ does not support ImageDepth
        with pytest.raises(ValueError):
            imwrite(fname, self.data, imagej=True, volumetric=True)

    def test_imagej_float_rgb(self, fname):
        # ImageJ does not support float with rgb
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data[..., :3].astype('f4'),
                imagej=True,
                photometric='rgb',
            )

    def test_imagej_planar(self, fname):
        # ImageJ does not support planar
        with pytest.raises(ValueError):
            imwrite(fname, self.data, imagej=True, planarconfig='separate')

    def test_colormap_shape(self, fname):
        # invalid colormap shape
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data.astype('u1'),
                photometric='palette',
                colormap=numpy.empty((3, 254), 'u2'),
            )

    def test_colormap_shape2(self, fname):
        # invalid colormap shape
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data.astype('u1'),
                colormap=numpy.empty((3, 254), 'u2'),
            )

    def test_colormap_shape3(self, fname):
        # invalid dtype for colormap
        with pytest.warns(UserWarning):
            imwrite(
                fname,
                self.data.astype('f2'),
                colormap=numpy.empty((3, 256), 'u2'),
            )
        with TiffFile(fname) as tif:
            assert tif.pages.first.colormap is None

    def test_colormap_dtype(self, fname):
        # invalid colormap dtype
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data.astype('u1'),
                photometric='palette',
                colormap=numpy.empty((3, 255), 'i2'),
            )

    def test_palette_dtype(self, fname):
        # invalid dtype for palette mode
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data.astype('u4'),
                photometric='palette',
                colormap=numpy.empty((3, 255), 'u2'),
            )

    def test_cfa_shape(self, fname):
        # invalid shape for CFA
        with pytest.raises(ValueError):
            imwrite(fname, self.data, photometric='cfa')

    def test_subfiletype_mask(self, fname):
        # invalid SubfileType MASK
        with pytest.raises(ValueError):
            imwrite(fname, self.data, subfiletype=0b100)

    def test_bitspersample_bilevel(self, fname):
        # invalid bitspersample for bilevel
        with pytest.raises(ValueError):
            imwrite(fname, self.data.astype('?'), bitspersample=2)

    def test_bitspersample_jpeg(self, fname):
        # invalid bitspersample for jpeg
        with pytest.raises(ValueError):
            imwrite(fname, self.data, compression='jpeg', bitspersample=17)

    def test_datetime(self, fname):
        # invalid datetime
        with pytest.raises(ValueError):
            imwrite(fname, self.data, datetime='date')

    def test_rgb(self, fname):
        # not an RGB image
        with pytest.raises(ValueError):
            imwrite(fname, self.data[:2], photometric='rgb')

    def test_extrasamples(self, fname):
        # invalid extrasamples
        with pytest.raises(ValueError):
            imwrite(
                fname, self.data, photometric='rgb', extrasamples=(0, 1, 2)
            )

    def test_subsampling(self, fname):
        # invalid subsampling
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data[..., :3],
                photometric='rgb',
                compression=7,
                subsampling=(3, 3),
            )

    def test_compress_bilevel(self, fname):
        # cannot compress bilevel image
        with pytest.raises(NotImplementedError):
            imwrite(fname, self.data.astype('?'), compression='jpeg')

    def test_description_unicode(self, fname):
        # strings must be 7-bit ASCII
        with pytest.raises(ValueError):
            imwrite(fname, self.data, description='mu: \u03bc')

    def test_compression_contiguous(self, fname):
        # contiguous cannot be used with compression, tiles
        with TiffWriter(fname) as tif:
            tif.write(self.data[0])
            with pytest.raises(ValueError):
                tif.write(self.data[1], contiguous=True, compression=8)

    def test_imagej_contiguous(self, fname):
        # ImageJ format does not support non-contiguous series
        with TiffWriter(fname, imagej=True) as tif:
            tif.write(self.data[0])
            with pytest.raises(ValueError):
                tif.write(self.data[1], contiguous=False)

    def test_subifds_subifds(self, fname):
        # SubIFDs in SubIFDs are not supported
        with TiffWriter(fname) as tif:
            tif.write(self.data[0], subifds=1)
            with pytest.raises(ValueError):
                tif.write(self.data[1], subifds=1)

    def test_subifds_truncate(self, fname):
        # SubIFDs cannot be used with truncate
        with TiffWriter(fname) as tif:
            tif.write(self.data, subifds=1, truncate=True)
            with pytest.raises(ValueError):
                tif.write(self.data[:, ::2, ::2])

    def test_subifds_imwrite(self, fname):
        # imwrite cannot be used to write SubIFDs
        with pytest.raises(TypeError):
            imwrite(fname, self.data, subifds=1)

    def test_iter_bytes(self, fname):
        # iterator contains wrong number of bytes
        with TiffWriter(fname) as tif:
            with pytest.raises(ValueError):
                tif.write(
                    iter([b'abc']),
                    shape=(13, 17),
                    dtype=numpy.uint8,
                    rowsperstrip=13,
                )

    def test_iter_dtype(self, fname):
        # iterator contains wrong dtype
        with TiffWriter(fname) as tif:
            with pytest.raises(ValueError):
                tif.write(
                    iter(self.data),
                    shape=(5, 13, 17),
                    dtype=numpy.uint8,
                    rowsperstrip=13,
                )
        with TiffWriter(fname) as tif:
            with pytest.raises(ValueError):
                tif.write(
                    iter(self.data),
                    shape=(5, 13, 17),
                    dtype=numpy.uint8,
                    rowsperstrip=5,
                    compression=8,
                )

    def test_tiff_enums(self):
        # TIFF.COMPRESSION and others are deprecated
        with pytest.raises(AttributeError):
            TIFF.COMPRESSION
        with pytest.raises(AttributeError):
            TIFF.EXTRASAMPLE
        with pytest.raises(AttributeError):
            TIFF.FILLORDER
        with pytest.raises(AttributeError):
            TIFF.PHOTOMETRIC
        with pytest.raises(AttributeError):
            TIFF.PLANARCONFIG
        with pytest.raises(AttributeError):
            TIFF.PREDICTOR
        with pytest.raises(AttributeError):
            TIFF.RESUNIT

    # def test_extratags(self, fname):
    #     # invalid dtype or count
    #     with pytest.raises(ValueError):
    #         imwrite(fname, data, extratags=[()])

    def test_container_binaryio(self, fname):
        # container cannot be binaryio
        with pytest.raises(ValueError):
            with FileHandle(fname, 'r+') as fh:
                imread(fh, container=fname)

    def test_imagej_colormap(self, fname):
        # invalid colormap shape
        with pytest.raises(ValueError):
            imwrite(
                fname,
                shape=(31, 33),
                dtype=numpy.float32,
                imagej=True,
                colormap=[1, 2, 3],
            )

    def test_imagej_colormap_u2(self, fname):
        # ImageJ does not support TIFF colormap shape
        with pytest.raises(ValueError):
            imwrite(
                fname,
                self.data.astype('u2'),
                imagej=True,
                colormap=numpy.empty((3, 2**16), 'u2'),
            )

    def test_imagej_colormap_rgb(self, fname):
        # do not write colormap with RGB
        data = self.data.astype('u1')[:3]
        data = numpy.moveaxis(data, 0, -1)
        with pytest.warns(UserWarning):
            imwrite(
                fname,
                data,
                photometric='rgb',
                imagej=True,
                colormap=numpy.empty((3, 256), 'u2'),
            )
        with TiffFile(fname) as tif:
            assert tif.pages.first.colormap is None

    def test_photometric(self, fname):
        # not an RGB image
        with pytest.raises(ValueError):
            imwrite(
                fname, shape=(31, 33, 2), dtype=numpy.uint8, photometric='rgb'
            )

    def test_tile(self, fname):
        # invalid tile in iterator
        def tile():
            yield numpy.empty((32, 32), dtype=numpy.uint16)

        with pytest.raises(ValueError):
            imwrite(
                fname,
                tile(),
                shape=(31, 33),
                dtype=numpy.uint16,
                tile=(16, 16),
            )
        with pytest.raises(ValueError):
            imwrite(
                fname, tile(), shape=(31, 33), dtype=numpy.uint8, tile=(32, 32)
            )

    def test_description(self, fname):
        # invalid description type
        with pytest.raises(ValueError):
            imwrite(
                fname,
                shape=(31, 33),
                dtype=numpy.uint8,
                description=32,
                metadata=None,
            )

    def test_omexml(self, fname):
        # invalid OME-XML
        imwrite(fname, shape=(31, 33), dtype=numpy.uint8)
        with pytest.raises(ValueError):
            imread(fname, omexml='OME')

    def test_key(self, fname):
        # invalid key
        imwrite(fname, shape=(31, 33), dtype=numpy.uint8)
        with pytest.raises(TypeError):
            imread(fname, key='1')
        with pytest.raises(TypeError):
            imread(fname, key='1', series=0)

    def test_filemode(self, fname):
        # invalid file open mode
        with pytest.raises(ValueError):
            imread(fname, mode='no')

    @pytest.mark.skipif(
        SKIP_CODECS or not imagecodecs.LERC.available, reason=REASON
    )
    def test_lerc_compression(self, fname):
        # invalid LERC compression
        with pytest.raises(ValueError):
            imwrite(
                fname,
                shape=(31, 33),
                dtype=numpy.uint8,
                compression='LERC',
                compressionargs={'compression': 'jpeg'},
            )

    @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
    def test_sequence_imread_dtype(self):
        # dtype argument is deprecated
        files = public_files('tifffile/temp_C001T00*.tif')
        with pytest.raises(TypeError):
            imread(files, dtype=numpy.float64)

    @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
    def test_filesequence_dtype(self):
        # dtype argument is deprecated
        files = public_files('tifffile/temp_C001T00*.tif')
        with pytest.raises(TypeError):
            TiffSequence(files).asarray(dtype=numpy.float64)
        with pytest.raises(TypeError):
            TiffSequence(files).aszarr(dtype=numpy.float64)


###############################################################################

# Test specific functions and classes


def test_class_tiffformat():
    """Test TiffFormat class."""
    tiff = TIFF.NDPI_LE
    assert not tiff.is_bigtiff
    assert tiff.is_ndpi
    str(tiff)
    repr(tiff)


def test_class_filecache():
    """Test FileCache class."""
    with TempFileName('class_filecache') as fname:
        cache = FileCache(3)

        with open(fname, 'wb') as fh:
            fh.close()

        # create 6 handles, leaving only first one open
        handles = []
        for i in range(6):
            fh = FileHandle(fname)
            if i > 0:
                fh.close()
            handles.append(fh)

        # open all files
        for fh in handles:
            cache.open(fh)
        assert len(cache) == 6
        for i, fh in enumerate(handles):
            assert not fh.closed
            assert cache.files[fh] == 1 if i else 2

        # close all files: only first file and recently used files are open
        for fh in handles:
            cache.close(fh)
        assert len(cache) == 3
        for i, fh in enumerate(handles):
            assert fh.closed == (0 < i < 4)
            if not 0 < i < 4:
                assert cache.files[fh] == 0 if i else 1

        # open all files, then clear cache: only first file is open
        for fh in handles:
            cache.open(fh)
        cache.clear()
        assert len(cache) == 1
        assert handles[0] in cache.files
        for i, fh in enumerate(handles):
            assert fh.closed == (i > 0)

        # randomly open and close files
        for i in range(13):
            fh = handles[random.randint(0, 5)]
            cache.open(fh)
            cache.close(fh)
            assert len(cache) <= 3
            assert fh in cache.files
            assert handles[0] in cache.files

        # randomly read from files
        for i in range(13):
            fh = handles[random.randint(0, 5)]
            cache.read(fh, 0, 0)
            assert len(cache) <= 3
            assert fh in cache.files
            assert handles[0] in cache.files

        # clear cache: only first file is open
        cache.clear()
        assert len(cache) == 1
        assert handles[0] in cache.files
        for i, fh in enumerate(handles):
            assert fh.closed == (i > 0)

        # open and close all files twice
        for fh in handles:
            cache.open(fh)
            cache.open(fh)
        assert len(cache) == 6
        for i, fh in enumerate(handles):
            assert not fh.closed
            assert cache.files[fh] == 2 if i else 3

        # close files once
        for fh in handles:
            cache.close(fh)
        assert len(cache) == 6
        for i, fh in enumerate(handles):
            assert not fh.closed
            assert cache.files[fh] == 1 if i else 2

        # close files twice
        for fh in handles:
            cache.close(fh)
        assert len(cache) == 3
        for i, fh in enumerate(handles):
            assert fh.closed == (0 < i < 4)
            if not 0 < i < 4:
                assert cache.files[fh] == 0 if i else 1

        # close all files
        cache.clear()
        handles[0].close()
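

# FileCache above behaves like a reference-counted LRU of file handles that
# keeps at most 'size' handles open and appears never to evict handles that
# were already open on first use. A minimal sketch of the API exercised:
#
#     cache = FileCache(3)
#     fh = FileHandle('example.tif')  # illustrative file name
#     cache.open(fh)                  # open or increment use count
#     cache.read(fh, 0, 0)            # read via a transiently opened handle
#     cache.close(fh)                 # decrement; may close over capacity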


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_class_tifftag_astuple():
    """Test TiffTag.astuple method."""
    fname = private_file('imagej/IJMetadata.tif')
    with TiffFile(fname) as tif:
        tags = tif.pages.first.tags
        assert tags['BitsPerSample'].astuple() == (
            258,
            3,
            1,
            b'\x00\x08',
            True,
        )
        assert tags['ImageDescription'].astuple() == (
            270,
            2,
            60,
            b'ImageJ=1.52b\nimages=3\nchannels=3\nmode=composite\nloop=false'
            b'\x00\x00',
            True,
        )
        assert tags['IJMetadataByteCounts'].astuple() == (
            50838,
            4,
            12,
            b'\x00\x00\x004\x00\x00\x07\xa8\x00\x00\x00\x06\x00\x00\x00\n\x00'
            b'\x00\x00\x08\x00\x00\x000\x00\x00\x03\x00\x00\x00\x03\x00\x00'
            b'\x00\x03\x00\x00\x00\x04\xa4\x00\x00\x00\xc0\x00\x00\x00\x80',
            True,
        )
        assert tags['IJMetadata'].astuple()[:3] == (50839, 1, 5896)


@pytest.mark.parametrize('bigtiff', [False, True])
@pytest.mark.parametrize('byteorder', ['<', '>'])
def test_class_tifftag_overwrite(bigtiff, byteorder):
    """Test TiffTag.overwrite method."""
    data = numpy.ones((16, 16, 3), dtype=byteorder + 'i2')
    bt = '_bigtiff' if bigtiff else ''
    bo = 'be' if byteorder == '>' else 'le'
    with TempFileName(f'class_tifftag_overwrite_{bo}{bt}') as fname:
        imwrite(
            fname,
            data,
            bigtiff=bigtiff,
            photometric=PHOTOMETRIC.RGB,
            software='in',
        )
        with TiffFile(fname, mode='r+') as tif:
            tags = tif.pages.first.tags
            # inline -> inline
            tag = tags[305]
            t305 = tag.overwrite('inl')
            assert tag.valueoffset == t305.valueoffset
            valueoffset = tag.valueoffset
            # xresolution
            tag = tags[282]
            t282 = tag.overwrite((2000, 1000))
            assert tag.valueoffset == t282.valueoffset
            # sampleformat, int -> uint
            tag = tags[339]
            t339 = tags[339].overwrite((1, 1, 1))
            assert tag.valueoffset == t339.valueoffset
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            tag = tags[305]
            assert tag.value == 'inl'
            assert tag.count == t305.count
            tag = tags[282]
            assert tag.value == (2000, 1000)
            assert tag.count == t282.count
            tag = tags[339]
            assert tag.value == (1, 1, 1)
            assert tag.count == t339.count
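

# TiffTag.overwrite updates the value in place whenever the new value fits
# the existing inline or separate storage, and only appends to the end of
# the file when it does not; the returned TiffTag carries the new offset:
#
#     tag = tif.pages.first.tags[305]
#     new = tag.overwrite('replacement')
#     moved = new.valueoffset != tag.valueoffset  # True when appended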
        # use bytes, specify dtype
        with TiffFile(fname, mode='r+') as tif:
            tags = tif.pages.first.tags
            # xresolution
            tag = tags[282]
            fmt = byteorder + '2I'
            t282 = tag.overwrite(struct.pack(fmt, 2500, 1500), dtype=fmt)
            assert tag.valueoffset == t282.valueoffset
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            tag = tags[282]
            assert tag.value == (2500, 1500)
            assert tag.count == t282.count
        # inline -> separate
        with TiffFile(fname, mode='r+') as tif:
            tag = tif.pages.first.tags[305]
            t305 = tag.overwrite('separate')
            assert tag.valueoffset != t305.valueoffset
        # separate at end -> separate longer
        with TiffFile(fname, mode='r+') as tif:
            tag = tif.pages.first.tags[305]
            assert tag.value == 'separate'
            assert tag.valueoffset == t305.valueoffset
            t305 = tag.overwrite('separate longer')
            assert tag.valueoffset == t305.valueoffset  # overwrite, not append
        # separate -> separate shorter
        with TiffFile(fname, mode='r+') as tif:
            tag = tif.pages.first.tags[305]
            assert tag.value == 'separate longer'
            assert tag.valueoffset == t305.valueoffset
            t305 = tag.overwrite('separate short')
            assert tag.valueoffset == t305.valueoffset
        # separate -> separate longer
        with TiffFile(fname, mode='r+') as tif:
            tag = tif.pages.first.tags[305]
            assert tag.value == 'separate short'
            assert tag.valueoffset == t305.valueoffset
            filesize = tif.filehandle.size
            t305 = tag.overwrite('separate longer')
            assert tag.valueoffset != t305.valueoffset
            assert t305.valueoffset == filesize  # append to end
        # separate -> inline
        with TiffFile(fname, mode='r+') as tif:
            tag = tif.pages.first.tags[305]
            assert tag.value == 'separate longer'
            assert tag.valueoffset == t305.valueoffset
            t305 = tag.overwrite('inl')
            assert tag.valueoffset != t305.valueoffset
            assert t305.valueoffset == valueoffset
        # inline -> erase
        with TiffFile(fname, mode='r+') as tif:
            tag = tif.pages.first.tags[305]
            assert tag.value == 'inl'
            assert tag.valueoffset == t305.valueoffset
            with pytest.raises(TypeError):
                t305 = tag.overwrite(tif, '')
            t305 = tag.overwrite('')
            assert tag.valueoffset == t305.valueoffset
        with TiffFile(fname) as tif:
            tag = tif.pages.first.tags[305]
            assert tag.value == ''
            assert tag.valueoffset == t305.valueoffset
        # change dtype
        with TiffFile(fname, mode='r+') as tif:
            tags = tif.pages.first.tags
            # imagewidth
            tag = tags[256]
            t256 = tag.overwrite(tag.value, dtype=3)
            assert tag.valueoffset == t256.valueoffset
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            tag = tags[256]
            assert tag.value == 16
            assert tag.count == t256.count
        if not bigtiff:
            assert_valid_tiff(fname)


@pytest.mark.skipif(
    SKIP_LARGE
    or SKIP_PRIVATE
    or SKIP_CODECS
    or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_class_tifftag_overwrite_ndpi():
    """Test TiffTag.overwrite method on 64-bit NDPI file."""
    fname = private_file('HamamatsuNDPI/103680x188160.ndpi')
    with TiffFile(fname, mode='r+') as tif:
        assert tif.is_ndpi
        tags = tif.pages.first.tags
        # inline, old value 32-bit
        assert tags['ImageWidth'].value == 188160
        tags['ImageWidth'].overwrite(0)
        tags['ImageWidth'].overwrite(188160)
        # separate, smaller or same length
        assert tags['Model'].value == 'C13220'
        tags['Model'].overwrite('C13220')
        with pytest.raises(struct.error):
            # new offset > 32-bit
            tags['Model'].overwrite('C13220-')
        assert tags['StripByteCounts'].value == (4461521316,)
        with pytest.raises(ValueError):
            # old value > 32-bit
            tags['StripByteCounts'].overwrite(0)
    with TiffFile(fname, mode='r') as tif:
        assert tif.is_ndpi
        tags = tif.pages.first.tags
        assert tags['ImageWidth'].value == 188160
        assert tags['Model'].value == 'C13220'
        assert tags['StripByteCounts'].value == (4461521316,)


def test_class_tifftags():
    """Test TiffTags interface."""
    data = random_data(numpy.uint8, (21, 31))
    with TempFileName('class_tifftags') as fname:
        imwrite(fname, data, description='test', software=False)
        with TiffFile(fname) as tif:
            tags = tif.pages.first.tags
            # assert len(tags) == 14
            assert 270 in tags
            assert 'ImageDescription' in tags
            assert tags[270].value == 'test'
            assert tags['ImageDescription'].value == 'test'
            assert tags.get(270).value == 'test'
            assert tags.get('ImageDescription').value == 'test'
            assert tags.get(270, index=0).value == 'test'
            assert tags.get('ImageDescription', index=0).value == 'test'
            assert tags.get(270, index=1).value.startswith('{')
            assert tags.get('ImageDescription', index=1).value.startswith('{')
            assert tags.get(270, index=2) is None
            assert tags.get('ImageDescription', index=2) is None
            assert tags.getall(270)[0].value == 'test'
            assert tags.getall(270)[1].value.startswith('{')
            assert tags.getall('ImageDescription')[0].value == 'test'
            assert tags.getall('ImageDescription')[1].value.startswith('{')
            assert tags.getall(0, default=1) == 1
            assert tags.getall('0', default=1) == 1
            assert tags.getall(None, default=1) == 1
            assert len(tags.getall(270)) == 2
            assert 305 not in tags
            assert 'Software' not in tags
            assert tags.get(305) is None
            assert tags.get('Software') is None
            with pytest.raises(KeyError):
                tags[305].value
            with pytest.raises(KeyError):
                tags['Software'].value
            assert len(tags.values()) == len(tags.items())
            assert len(tags.keys()) == len(tags.items()) - 1
            assert set(tags.keys()) == {i[0] for i in tags.items()}
            assert list(tags.values()) == [i[1] for i in tags.items()]
            assert list(tags.values()) == list(tags)
            tag270 = tags[270]
            del tags[270]
            assert 270 not in tags
            assert 'ImageDescription' not in tags
            with pytest.raises(KeyError):
                del tags[270]
            with pytest.raises(KeyError):
                del tags['ImageDescription']
            tags.add(tag270)
            assert 270 in tags
            assert 'ImageDescription' in tags
            del tags['ImageDescription']
            assert 270 not in tags
            assert 'ImageDescription' not in tags
            tags[270] = tag270
            assert 270 in tags
            assert 'ImageDescription' in tags
            assert 0 not in tags
            assert 'None' not in tags
            assert None not in tags
            assert 'TiffTags' in repr(tags)


def test_class_tifftagregistry():
    """Test TiffTagRegistry."""
    numtags = 666
    tags = TIFF.TAGS
    assert len(tags) == numtags
    assert tags[11] == 'ProcessingSoftware'
    assert tags['ProcessingSoftware'] == 11
    assert tags.getall(11) == ['ProcessingSoftware']
    assert tags.getall('ProcessingSoftware') == [11]
    tags.add(11, 'ProcessingSoftware')
    assert len(tags) == numtags

    # one code with two names
    assert 34853 in tags
    assert 'GPSTag' in tags
    assert 'OlympusSIS2' in tags
    assert tags[34853] == 'GPSTag'
    assert tags['GPSTag'] == 34853
    assert tags['OlympusSIS2'] == 34853
    assert tags.getall(34853) == ['GPSTag', 'OlympusSIS2']
    assert tags.getall('GPSTag') == [34853]

    del tags[34853]
    assert len(tags) == numtags - 2
    assert 34853 not in tags
    assert 'GPSTag' not in tags
    assert 'OlympusSIS2' not in tags
    tags.add(34853, 'GPSTag')
    tags.add(34853, 'OlympusSIS2')
    assert 34853 in tags
    assert 'GPSTag' in tags
    assert 'OlympusSIS2' in tags

    info = str(tags)
    assert "34853, 'GPSTag'" in info
    assert "34853, 'OlympusSIS2'" in info

    # two codes with same name
    assert 37387 in tags
    assert 41483 in tags
    assert 'FlashEnergy' in tags
    assert tags[37387] == 'FlashEnergy'
    assert tags[41483] == 'FlashEnergy'
    assert tags['FlashEnergy'] == 37387
    assert tags.getall('FlashEnergy') == [37387, 41483]
    assert tags.getall(37387) == ['FlashEnergy']
    assert tags.getall(41483) == ['FlashEnergy']

    del tags['FlashEnergy']
    assert len(tags) == numtags - 2
    assert 37387 not in tags
    assert 41483 not in tags
    assert 'FlashEnergy' not in tags
    tags.add(37387, 'FlashEnergy')
    tags.add(41483, 'FlashEnergy')
    assert 37387 in tags
    assert 41483 in tags
    assert 'FlashEnergy' in tags
    assert "37387, 'FlashEnergy'" in info
    assert "41483, 'FlashEnergy'" in info
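

# TIFF.TAGS is a TiffTagRegistry, a bidirectional mapping that tolerates
# duplicate entries in both directions, as exercised above:
#
#     TIFF.TAGS[34853]         # -> 'GPSTag' (first registered name)
#     TIFF.TAGS['GPSTag']      # -> 34853
#     TIFF.TAGS.getall(34853)  # -> ['GPSTag', 'OlympusSIS2']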
assert tags.getall(41483) == ['FlashEnergy'] del tags['FlashEnergy'] assert len(tags) == numtags - 2 assert 37387 not in tags assert 41483 not in tags assert 'FlashEnergy' not in tags tags.add(37387, 'FlashEnergy') tags.add(41483, 'FlashEnergy') assert 37387 in tags assert 41483 in tags assert 'FlashEnergy' in tags assert "37387, 'FlashEnergy'" in info assert "41483, 'FlashEnergy'" in info @pytest.mark.parametrize( 'shape, storedshape, dtype, axes, error', [ # separate and contig ((32, 32), (1, 2, 1, 32, 32, 2), numpy.uint8, None, ValueError), # depth ((32, 32, 32), (1, 1, 32, 32, 32, 1), numpy.uint8, None, OmeXmlError), # dtype ((32, 32), (1, 1, 1, 32, 32, 1), numpy.float16, None, OmeXmlError), # empty ((0, 0), (1, 1, 1, 0, 0, 1), numpy.uint8, None, OmeXmlError), # not YX ((32, 32), (1, 1, 1, 32, 32, 1), numpy.uint8, 'XZ', OmeXmlError), # unknown axis ((1, 32, 32), (1, 1, 1, 32, 32, 1), numpy.uint8, 'KYX', OmeXmlError), # double axis ((1, 32, 32), (1, 1, 1, 32, 32, 1), numpy.uint8, 'YYX', OmeXmlError), # more than 5 dimensions ( (1, 1, 1, 5, 32, 32), (5, 1, 1, 32, 32, 1), numpy.uint8, None, OmeXmlError, ), # more than 6 dimensions ( (1, 1, 1, 1, 32, 32, 3), (1, 1, 1, 32, 32, 3), numpy.uint8, None, OmeXmlError, ), # more than 8 dimensions ( (1, 1, 1, 1, 1, 1, 1, 32, 32), (1, 1, 1, 32, 32, 1), numpy.uint8, 'ARHETZCYX', OmeXmlError, ), # more than 9 dimensions ( (1, 1, 1, 1, 1, 1, 1, 32, 32, 3), (1, 1, 1, 32, 32, 3), numpy.uint8, 'ARHETZCYXS', OmeXmlError, ), # double axis ((1, 32, 32), (1, 1, 1, 32, 32, 1), numpy.uint8, 'YYX', OmeXmlError), # planecount mismatch ((3, 32, 32), (1, 1, 1, 32, 32, 1), numpy.uint8, 'CYX', ValueError), # stored shape mismatch ((3, 32, 32), (1, 2, 1, 32, 32, 1), numpy.uint8, 'SYX', ValueError), ((32, 32, 3), (1, 1, 1, 32, 32, 2), numpy.uint8, 'YXS', ValueError), ((3, 32, 32), (1, 3, 1, 31, 31, 1), numpy.uint8, 'SYX', ValueError), ((32, 32, 3), (1, 1, 1, 31, 31, 3), numpy.uint8, 'YXS', ValueError), ((32, 32), (1, 1, 1, 32, 31, 1), numpy.uint8, None, ValueError), # too many modulo dimensions ( (2, 3, 4, 5, 32, 32), (60, 1, 1, 32, 32, 1), numpy.uint8, 'RHEQYX', OmeXmlError, ), ], ) def test_class_omexml_fail(shape, storedshape, dtype, axes, error): """Test OmeXml class failures.""" metadata = {'axes': axes} if axes else {} ox = OmeXml() with pytest.raises(error): ox.addimage(dtype, shape, storedshape, **metadata) @pytest.mark.parametrize( 'axes, autoaxes, shape, storedshape, dimorder', [ ('YX', 'YX', (32, 32), (1, 1, 1, 32, 32, 1), 'XYCZT'), ('YXS', 'YXS', (32, 32, 1), (1, 1, 1, 32, 32, 1), 'XYCZT'), ('SYX', 'SYX', (1, 32, 32), (1, 1, 1, 32, 32, 1), 'XYCZT'), ('YXS', 'YXS', (32, 32, 3), (1, 1, 1, 32, 32, 3), 'XYCZT'), ('SYX', 'SYX', (3, 32, 32), (1, 3, 1, 32, 32, 1), 'XYCZT'), ('CYX', 'CYX', (5, 32, 32), (5, 1, 1, 32, 32, 1), 'XYCZT'), ('CYXS', 'CYXS', (5, 32, 32, 1), (5, 1, 1, 32, 32, 1), 'XYCZT'), ('CSYX', 'ZCYX', (5, 1, 32, 32), (5, 1, 1, 32, 32, 1), 'XYCZT'), # ! 
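# ! without explicit axes metadata, shape (5, 1, 32, 32) with a size-1
# samples dimension is guessed as 'ZCYX', not 'CSYX'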
('CYXS', 'CYXS', (5, 32, 32, 3), (5, 1, 1, 32, 32, 3), 'XYCZT'), ('CSYX', 'CSYX', (5, 3, 32, 32), (5, 3, 1, 32, 32, 1), 'XYCZT'), ('TZCYX', 'TZCYX', (3, 4, 5, 32, 32), (60, 1, 1, 32, 32, 1), 'XYCZT'), ( 'TZCYXS', 'TZCYXS', (3, 4, 5, 32, 32, 1), (60, 1, 1, 32, 32, 1), 'XYCZT', ), ( 'TZCSYX', 'TZCSYX', (3, 4, 5, 1, 32, 32), (60, 1, 1, 32, 32, 1), 'XYCZT', ), ( 'TZCYXS', 'TZCYXS', (3, 4, 5, 32, 32, 3), (60, 1, 1, 32, 32, 3), 'XYCZT', ), ('ZTCSYX', '', (3, 4, 5, 3, 32, 32), (60, 3, 1, 32, 32, 1), 'XYCTZ'), ], ) @pytest.mark.parametrize('metadata', ('axes', None)) def test_class_omexml(axes, autoaxes, shape, storedshape, dimorder, metadata): """Test OmeXml class.""" dtype = numpy.uint8 if not metadata and dimorder != 'XYCZT': pytest.xfail('') metadata = dict(axes=axes) if metadata else dict() omexml = OmeXml() omexml.addimage(dtype, shape, storedshape, **metadata) if IS_WIN: assert '\n ' in str(omexml) omexml = omexml.tostring() assert dimorder in omexml if metadata: autoaxes = axes for ax in 'XYCZT': if ax in autoaxes: size = shape[autoaxes.index(ax)] else: size = 1 if ax == 'C': size *= storedshape[1] * storedshape[-1] assert f'Size{ax}="{size}"' in omexml assert__repr__(omexml) assert_valid_omexml(omexml) @pytest.mark.parametrize( 'axes, shape, storedshape, sizetzc, dimorder', [ ('ZAYX', (3, 4, 32, 32), (12, 1, 1, 32, 32, 1), (1, 12, 1), 'XYCZT'), ('AYX', (3, 32, 32), (3, 1, 1, 32, 32, 1), (3, 1, 1), 'XYCZT'), ('APYX', (3, 4, 32, 32), (12, 1, 1, 32, 32, 1), (3, 4, 1), 'XYCZT'), ('TAYX', (3, 4, 32, 32), (12, 1, 1, 32, 32, 1), (12, 1, 1), 'XYCZT'), ( 'CHYXS', (3, 4, 32, 32, 3), (12, 1, 1, 32, 32, 3), (1, 1, 36), 'XYCZT', ), ( 'CHSYX', (3, 4, 3, 32, 32), (12, 3, 1, 32, 32, 1), (1, 1, 36), 'XYCZT', ), ( 'APRYX', (3, 4, 5, 32, 32), (60, 1, 1, 32, 32, 1), (3, 4, 5), 'XYCZT', ), ( 'TAPYX', (3, 4, 5, 32, 32), (60, 1, 1, 32, 32, 1), (12, 5, 1), 'XYCZT', ), ( 'TZAYX', (3, 4, 5, 32, 32), (60, 1, 1, 32, 32, 1), (3, 20, 1), 'XYCZT', ), ( 'ZCHYX', (3, 4, 5, 32, 32), (60, 1, 1, 32, 32, 1), (1, 3, 20), 'XYCZT', ), ( 'EPYX', (10, 5, 200, 200), (50, 1, 1, 200, 200, 1), (10, 5, 1), 'XYCZT', ), ( 'TQCPZRYX', (2, 3, 4, 5, 6, 7, 32, 32), (5040, 1, 1, 32, 32, 1), (6, 42, 20), 'XYZCT', ), ], ) def test_class_omexml_modulo(axes, shape, storedshape, sizetzc, dimorder): """Test OmeXml class with modulo dimensions.""" dtype = numpy.uint8 omexml = OmeXml() omexml.addimage(dtype, shape, storedshape, axes=axes) assert '\n ' in str(omexml) omexml = omexml.tostring() assert dimorder in omexml for ax, size in zip('TZC', sizetzc): assert f'Size{ax}="{size}"' in omexml assert__repr__(omexml) assert_valid_omexml(omexml) def test_class_omexml_attributes(): """Test OmeXml class with attributes and elements.""" from uuid import uuid1 uuid = str(uuid1()) metadata = dict( # document UUID=uuid, Creator=f'test_tifffile.py {tifffile.__version__}', # image axes='QZYXS', Name='ImageName', Acquisitiondate='2011-09-16T10:45:48', Description='Image "Description" < & >\n{test}', TypeDescription={'Q': 'Phasor'}, SignificantBits=12, PhysicalSizeX=1.1, PhysicalSizeXUnit='nm', PhysicalSizeY=1.2, PhysicalSizeYUnit='\xb5m', PhysicalSizeZ=1.3, PhysicalSizeZUnit='\xc5', TimeIncrement=1.4, TimeIncrementUnit='\xb5s', Channel=dict(Name='ChannelName'), # one channel with 3 samples Plane=dict(PositionZ=[0.0, 0.0, 2.0, 2.0, 4.0, 4.0]), # 6 TZ-planes CommentAnnotation='Tifffile test', BooleanAnnotation=True, LongAnnotation=[1, 2], DoubleAnnotation={'Description': 'A double', 'Value': 1.0}, MapAnnotation=[ {'Description': 'description', 'key': '', 'key2': 
1.0}, {'Namespace': 'ns.org', 'key2': 1, 'key3': 2}, ], ) omexml = OmeXml(**metadata) omexml.addimage( numpy.uint16, (2, 3, 32, 32, 3), (6, 1, 1, 32, 32, 3), **metadata ) xml = omexml.tostring() for value in ( f'UUID="urn:uuid:{uuid}"', 'SizeX="32" SizeY="32" SizeC="3" SizeZ="3" SizeT="2"', 'Interleaved="true" SignificantBits="12"', '', '', '', '' 'Tifffile test', '' 'true', '1' '2', '' 'A double' '1.0', '' 'description' '<str/>1.0' '', '' '12', ): assert value in xml if IS_PYPY: pytest.xfail('lxml bug?') assert__repr__(omexml) assert_valid_omexml(xml) assert '\n ' in str(omexml) with TempFileName('class_omexml_attributes') as fname: imwrite( fname, shape=(2, 3, 32, 32, 3), dtype=numpy.uint16, ome=True, photometric='rgb', metadata=metadata, ) assert imread(fname).shape == (2, 3, 32, 32, 3) def test_class_omexml_datasets(): """Test OmeXml class with datasets.""" args = numpy.uint8, (3, 7), (1, 1, 1, 3, 7, 1) kwargs = {'dtype': numpy.uint8, 'shape': (3, 7)} omexml = OmeXml() # no dataset omexml.addimage(*args, CommentAnnotation='0') # first dataset explicit omexml.addimage( *args, Dataset={ 'Name': 'First', 'Description': 'Dataset', 'CommentAnnotation': '1', }, ) # first dataset implicit omexml.addimage(*args, CommentAnnotation='2') # no dataset omexml.addimage(*args, Dataset=None, CommentAnnotation='3') # first dataset implicit omexml.addimage(*args, CommentAnnotation='4') # second dataset explicit omexml.addimage(*args, Dataset={'CommentAnnotation': '5'}) xml = omexml.tostring() for value in ( '' 'Dataset' '' '' '' '' '', '' '' '' '', '' '', ): assert value in xml if IS_PYPY: pytest.xfail('lxml bug?') assert__repr__(omexml) assert_valid_omexml(xml) assert '\n ' in str(omexml) with TempFileName('class_omexml_datasets') as fname: with TiffWriter(fname, ome=True) as tif: tif.write(**kwargs, metadata={'CommentAnnotation': '0'}) tif.write( **kwargs, metadata={ 'Dataset': { 'Name': 'First', 'Description': 'Dataset', 'CommentAnnotation': '1', } }, ) tif.write(**kwargs, metadata={'CommentAnnotation': '2'}) tif.write( **kwargs, metadata={'Dataset': None, 'CommentAnnotation': '3'} ) tif.write(**kwargs, metadata={'CommentAnnotation': '4'}) tif.write( **kwargs, metadata={'Dataset': {'CommentAnnotation': '5'}} ) with TiffFile(fname) as tif: assert len(tif.series) == 6 def test_class_omexml_multiimage(): """Test OmeXml class with multiple images.""" omexml = OmeXml(description='multiimage') omexml.addimage( numpy.uint8, (32, 32, 3), (1, 1, 1, 32, 32, 3), name='preview' ) omexml.addimage( numpy.float32, (4, 256, 256), (4, 1, 1, 256, 256, 1), name='1' ) omexml.addimage('bool', (256, 256), (1, 1, 1, 256, 256, 1), name='mask') assert '\n ' in str(omexml) omexml = omexml.tostring() assert 'TiffData IFD="0" PlaneCount="1"' in omexml assert 'TiffData IFD="1" PlaneCount="4"' in omexml assert 'TiffData IFD="5" PlaneCount="1"' in omexml assert_valid_omexml(omexml) def test_class_timer(capsys): """Test Timer class.""" started = Timer.clock() with Timer('test_class_timer', started=started) as timer: assert timer.started == started captured = capsys.readouterr() assert captured.out == 'test_class_timer ' duration = timer.stop() assert timer.duration == duration assert timer.stopped == started + duration timer.duration = 314159.265359 assert__repr__(timer) captured = capsys.readouterr() assert captured.out == '3 days, 15:15:59.265359 s\n' def test_func_xml2dict(): """Test xml2dict function.""" xml = """ -1 -1,2 -3.14 1.0, -2.0 True Lorem, Ipsum """ d = xml2dict(xml)['root'] assert d['attr'] == 'attribute' assert 
d['int'] == -1 assert d['ints'] == (-1, 2) assert d['float'] == -3.14 assert d['floats'] == (1.0, -2.0) assert d['bool'] is True assert d['string'] == 'Lorem, Ipsum' d = xml2dict(xml, prefix=('a_', 'b_'), sep='')['root'] assert d['ints'] == '-1,2' assert d['floats'] == '1.0, -2.0' def test_func_memmap(): """Test memmap function.""" with TempFileName('func_memmap_new') as fname: # create new file im = memmap( fname, shape=(32, 16), dtype=numpy.float32, bigtiff=True, compression=False, ) im[31, 15] = 1.0 im.flush() assert im.shape == (32, 16) assert im.dtype == numpy.float32 del im im = memmap(fname, page=0, mode='r') assert im[31, 15] == 1.0 del im im = memmap(fname, series=0, mode='c') assert im[31, 15] == 1.0 del im # append to file im = memmap( fname, shape=(3, 64, 64), dtype=numpy.uint16, append=True, photometric=PHOTOMETRIC.MINISBLACK, ) im[2, 63, 63] = 1.0 im.flush() assert im.shape == (3, 64, 64) assert im.dtype == numpy.uint16 del im im = memmap(fname, page=3, mode='r') assert im[63, 63] == 1 del im im = memmap(fname, series=1, mode='c') assert im[2, 63, 63] == 1 del im # can not memory-map compressed array with pytest.raises(ValueError): memmap( fname, shape=(16, 16), dtype=numpy.float32, append=True, compression=COMPRESSION.ADOBE_DEFLATE, ) @pytest.mark.skipif(IS_BE, reason=REASON) @pytest.mark.parametrize('byteorder', ['>', '<']) def test_func_memmap_byteorder(byteorder): """Test non-native byteorder can be memory mapped.""" bo = {'<': 'le', '>': 'be'}[byteorder] with TempFileName(f'func_memmap_byteorder_{bo}') as fname: if os.path.exists(fname): os.remove(fname) # new, empty file im = memmap( fname, shape=(16, 16), dtype=numpy.uint16, byteorder=byteorder ) assert im.dtype.byteorder == byteorder im[0, 0] = 253 im[1, 1] = 257 del im with TiffFile(fname) as tif: assert tif.byteorder == byteorder im = tif.asarray() assert im.dtype.byteorder in {'<', '='} # native assert im[0, 0] == 253 assert im[1, 1] == 257 im = memmap(fname, mode='r') assert im.dtype.byteorder == '=' if byteorder == '<' else '>' assert im[0, 0] == 253 assert im[1, 1] == 257 del im def test_func_repeat_nd(): """Test repeat_nd function.""" a = repeat_nd([[0, 1, 2], [3, 4, 5], [6, 7, 8]], (2, 3)) assert_array_equal( a, [ [0, 0, 0, 1, 1, 1, 2, 2, 2], [0, 0, 0, 1, 1, 1, 2, 2, 2], [3, 3, 3, 4, 4, 4, 5, 5, 5], [3, 3, 3, 4, 4, 4, 5, 5, 5], [6, 6, 6, 7, 7, 7, 8, 8, 8], [6, 6, 6, 7, 7, 7, 8, 8, 8], ], ) def test_func_byteorder_isnative(): """Test byteorder_isnative function.""" assert byteorder_isnative(sys.byteorder) assert byteorder_isnative('=') if sys.byteorder == 'little': assert byteorder_isnative('<') assert not byteorder_isnative('>') else: assert byteorder_isnative('>') assert not byteorder_isnative('<') def test_func_byteorder_compare(): """Test byteorder_isnative function.""" assert byteorder_compare('<', '<') assert byteorder_compare('>', '>') assert byteorder_compare('=', '=') assert byteorder_compare('|', '|') assert byteorder_compare('>', '|') assert byteorder_compare('<', '|') assert byteorder_compare('|', '>') assert byteorder_compare('|', '<') assert byteorder_compare('=', '|') assert byteorder_compare('|', '=') if sys.byteorder == 'little': assert byteorder_compare('<', '=') else: assert byteorder_compare('>', '=') def test_func_reshape_nd(): """Test reshape_nd function.""" assert reshape_nd(numpy.empty(0), 2).shape == (1, 0) assert reshape_nd(numpy.empty(1), 3).shape == (1, 1, 1) assert reshape_nd(numpy.empty((2, 3)), 3).shape == (1, 2, 3) assert reshape_nd(numpy.empty((2, 3, 4)), 3).shape == (2, 3, 4) 
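# reshape_nd also accepts a plain shape tuple and then returns a tuple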
assert reshape_nd((0,), 2) == (1, 0) assert reshape_nd((1,), 3) == (1, 1, 1) assert reshape_nd((2, 3), 3) == (1, 2, 3) assert reshape_nd((2, 3, 4), 3) == (2, 3, 4) def test_func_order_axes(): """Test axis_order function.""" axes = [(0, 2, 0), (1, 2, 0), (0, 2, 1), (1, 2, 1)] # first axis varies fastest, second axis is constant assert order_axes(axes, True) == (2, 0) assert order_axes(axes, False) == (1, 2, 0) def test_func_apply_colormap(): """Test apply_colormap function.""" image = numpy.arange(256, dtype=numpy.uint8) colormap = numpy.vstack([image, image, image]).astype(numpy.uint16) * 256 assert_array_equal(apply_colormap(image, colormap)[-1], colormap[:, -1]) def test_func_parse_filenames(): """Test parse_filenames function.""" def func(*args, **kwargs): labels, shape, indices, _ = parse_filenames(*args, **kwargs) return ''.join(labels), shape, indices files = ['c1t001.ext', 'c1t002.ext', 'c2t002.ext'] # 'c2t001.ext' missing # group names p = r'(?P\d).[!\d](?P\d+)\.ext' assert func(files[:1], p) == ('ab', (1, 1), [(0, 0)]) # (1, 1) assert func(files[:2], p) == ('ab', (1, 2), [(0, 0), (0, 1)]) # (1, 1) assert func(files, p) == ('ab', (2, 2), [(0, 0), (0, 1), (1, 1)]) # (1, 1) # unknown axes p = r'(\d)[^\d](\d+)\.ext' assert func(files[:1], p) == ('QQ', (1, 1), [(0, 0)]) # (1, 1) assert func(files[:2], p) == ('QQ', (1, 2), [(0, 0), (0, 1)]) # (1, 1) assert func(files, p) == ('QQ', (2, 2), [(0, 0), (0, 1), (1, 1)]) # (1, 1) # match axes p = r'([^\d])(\d)([^\d])(\d+)\.ext' assert func(files[:1], p) == ('ct', (1, 1), [(0, 0)]) # (1, 1) assert func(files[:2], p) == ('ct', (1, 2), [(0, 0), (0, 1)]) # (1, 1) assert func(files, p) == ('ct', (2, 2), [(0, 0), (0, 1), (1, 1)]) # (1, 1) # misc files = ['c0t001.ext', 'c0t002.ext', 'c2t002.ext'] # 'c2t001.ext' missing p = r'([^\d])(\d)[^\d](?P\d+)\.ext' assert func(files[:1], p) == ('cb', (1, 1), [(0, 0)]) # (0, 1) assert func(files[:2], p) == ('cb', (1, 2), [(0, 0), (0, 1)]) # (0, 1) assert func(files, p) == ('cb', (3, 2), [(0, 0), (0, 1), (2, 1)]) # (0, 1) # BBBC006_v1 categories = {'p': {chr(i + 97): i for i in range(25)}} files = [ 'BBBC006_v1_images_z_00/mcf-z-stacks-03212011_a01_s1_w1a57.tif', 'BBBC006_v1_images_z_00/mcf-z-stacks-03212011_a03_s2_w1419.tif', 'BBBC006_v1_images_z_00/mcf-z-stacks-03212011_p24_s2_w2283.tif', 'BBBC006_v1_images_z_01/mcf-z-stacks-03212011_p24_s2_w11cf.tif', ] # don't match directory p = r'_(?P
<p>[a-z])(?P<a>\d+)(?:_(s)(\d))(?:_(w)(\d))'
    assert func(files[:1], p, categories=categories) == (
        'pasw',
        (1, 1, 1, 1),
        [(0, 0, 0, 0)],
        # (97, 1, 1, 1),
    )
    assert func(files[:2], p, categories=categories) == (
        'pasw',
        (1, 3, 2, 1),
        [(0, 0, 0, 0), (0, 2, 1, 0)],
        # (97, 1, 1, 1),
    )
    # match directory
    p = r'(?:_(z)_(\d+)).*_(?P
<p>[a-z])(?P<a>\d+)(?:_(s)(\d))(?:_(w)(\d))'
    assert func(files, p, categories=categories) == (
        'zpasw',
        (2, 16, 24, 2, 2),
        [
            (0, 0, 0, 0, 0),
            (0, 0, 2, 1, 0),
            (0, 15, 23, 1, 1),
            (1, 15, 23, 1, 0),
        ],
        # (0, 97, 1, 1, 1),
    )
    # reorder axes
    p = r'(?:_(z)_(\d+)).*_(?P
<p>
[a-z])(?P\d+)(?:_(s)(\d))(?:_(w)(\d))' assert func( files, p, axesorder=(2, 0, 1, 3, 4), categories=categories ) == ( 'azpsw', (24, 2, 16, 2, 2), [ (0, 0, 0, 0, 0), (2, 0, 0, 1, 0), (23, 0, 15, 1, 1), (23, 1, 15, 1, 0), ], # (1, 0, 97, 1, 1), ) def test_func_reshape_axes(): """Test reshape_axes function.""" assert reshape_axes('YXS', (219, 301, 1), (219, 301, 1)) == 'YXS' assert reshape_axes('YXS', (219, 301, 3), (219, 301, 3)) == 'YXS' assert reshape_axes('YXS', (219, 301, 1), (219, 301)) == 'YX' assert reshape_axes('YXS', (219, 301, 1), (219, 1, 1, 301, 1)) == 'YQQXS' assert reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 301, 1)) == 'QQYXQ' assert ( reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 1, 301, 1)) == 'QQYQXQ' ) assert ( reshape_axes('IYX', (12, 219, 301), (3, 2, 219, 2, 301, 1)) == 'QQQQXQ' ) with pytest.raises(ValueError): reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 2, 301, 1)) with pytest.raises(ValueError): reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 301, 2)) def test_func_julian_datetime(): """Test julian_datetime function.""" assert julian_datetime(2451576, 54362783) == ( datetime.datetime(2000, 2, 2, 15, 6, 2, 783 * 1000) ) def test_func_excel_datetime(): """Test excel_datetime function.""" assert excel_datetime(40237.029999999795) == ( datetime.datetime(2010, 2, 28, 0, 43, 11, 999982) ) def test_func_natural_sorted(): """Test natural_sorted function.""" assert natural_sorted(['f1', 'f2', 'f10']) == ['f1', 'f2', 'f10'] def test_func_stripnull(): """Test stripnull function.""" # with pytest.warns(DeprecationWarning): assert stripnull(b'string\x00') == b'string' assert stripnull(b'string\x00x') == b'string' assert stripnull('string\x00', null='\0') == 'string' assert ( stripnull(b'string\x00string\x00\x00', first=False) == b'string\x00string' ) assert ( stripnull('string\x00string\x00\x00', null='\0', first=False) == 'string\x00string' ) def test_func_stripascii(): """Test stripascii function.""" assert stripascii(b'string\x00string\n\x01\x00') == b'string\x00string\n' assert stripascii(b'\x00') == b'' def test_func_sequence(): """Test sequence function.""" assert sequence(1) == (1,) assert sequence([1]) == [1] def test_func_product(): """Test product function.""" assert product([2**8, 2**30]) == 274877906944 assert product([]) == 1 assert product(numpy.array([2**8, 2**30], numpy.int32)) == 274877906944 def test_func_squeeze_axes(): """Test squeeze_axes function.""" assert squeeze_axes((5, 1, 2, 1, 1), 'TZYXC') == ( (5, 2, 1), 'TYX', (True, False, True, True, False), ) assert squeeze_axes((1,), 'Y') == ((1,), 'Y', (True,)) assert squeeze_axes((1,), 'Q') == ((1,), 'Q', (True,)) assert squeeze_axes((1, 1), 'PQ') == ((1,), 'Q', (False, True)) def test_func_transpose_axes(): """Test transpose_axes function.""" assert transpose_axes( numpy.zeros((2, 3, 4, 5)), 'TYXC', asaxes='CTZYX' ).shape == (5, 2, 1, 3, 4) def test_func_subresolution(): """Test subresolution function.""" class a: dtype = numpy.uint8 axes = 'QzyxS' shape = (3, 256, 512, 1024, 4) class b: dtype = numpy.uint8 axes = 'QzyxS' shape = (3, 128, 256, 512, 4) assert subresolution(a, a) == 0 assert subresolution(a, b) == 1 assert subresolution(a, b, p=2, n=2) == 1 assert subresolution(a, b, p=3) is None b.shape = (3, 86, 171, 342, 4) assert subresolution(a, b, p=3) == 1 b.shape = (3, 128, 256, 512, 2) assert subresolution(a, b) is None b.shape = (3, 64, 256, 512, 4) assert subresolution(a, b) is None b.shape = (3, 128, 64, 512, 4) assert subresolution(a, b) is None b.shape = (3, 128, 256, 1024, 4) assert 
subresolution(a, b) is None b.shape = (3, 32, 64, 128, 4) assert subresolution(a, b) == 3 @pytest.mark.skipif(IS_BE, reason=REASON) def test_func_unpack_rgb(): """Test unpack_rgb function.""" data = struct.pack('BBBB', 0x21, 0x08, 0xFF, 0xFF) assert_array_equal( unpack_rgb(data, ' Unknown = unknown % Comment """ ) assert p['Array'] == [1, 2] assert p['Array.2D'] == [[1], [2]] assert p['Array.Empty'] == [] assert p['Cell'] == ['', ''] assert p['Class'] == '@class' assert p['False'] is False assert p['Filename'] == 'C:\\Users\\scanimage.cfg' assert p['Float'] == 3.14 assert p['Float.E'] == 3.14 assert p['Float.Inf'] == float('inf') # self.assertEqual(p['Float.NaN'], float('nan')) # can't compare NaN assert p['Int'] == 10 assert p['StructObject'] == '' assert p['Ones'] == [[]] assert p['String'] == 'string' assert p['String.Array'] == 'ab' assert p['String.Empty'] == '' assert p['Transform'] == [[1, 0, 0], [0, 1, 0], [0, 0, 1]] assert p['True'] is True assert p['Unknown'] == 'unknown' assert p['Zeros'] == [[0.0]] assert p['Zeros.Empty'] == [[]] assert p['false'] is False assert p['true'] is True def test_func_strptime(): """Test strptime function.""" now = datetime.datetime.now().replace(microsecond=0) assert strptime(now.isoformat()) == now assert strptime(now.strftime('%Y:%m:%d %H:%M:%S')) == now assert strptime(now.strftime('%Y%m%d %H:%M:%S.%f')) == now def test_func_hexdump(): """Test hexdump function.""" # test hexdump function data = binascii.unhexlify( '49492a00080000000e00fe0004000100' '00000000000000010400010000000001' '00000101040001000000000100000201' '030001000000200000000301030001' ) # one line assert hexdump(data[:16]) == ( '49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............' ) # height=1 assert hexdump(data, width=64, height=1) == ( '49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............' ) # all lines assert hexdump(data) == ( '00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 ' 'II*.............\n' '10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 ' '................\n' '20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 ' '................\n' '30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 ' '...... ........' ) # skip center assert hexdump(data, height=3, snipat=0.5) == ( '00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 ' 'II*.............\n' ' ...\n' '30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 ' '...... ........' ) # skip start assert hexdump(data, height=3, snipat=0) == ( '10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 ' '................\n' '20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 ' '................\n' '30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 ' '...... ........' ) # skip end assert hexdump(data, height=3, snipat=1) == ( '00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 ' 'II*.............\n' '10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 ' '................\n' '20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 ' '................' 
) def test_func_asbool(): """Test asbool function.""" for true in ('TRUE', ' True ', 'true '): assert asbool(true) assert asbool(true.encode()) for false in ('FALSE', ' False ', 'false '): assert not asbool(false) assert not asbool(false.encode()) assert asbool('ON', ['on'], ['off']) assert asbool('ON', 'on', 'off') with pytest.raises(TypeError): assert asbool('Yes') with pytest.raises(TypeError): assert asbool('True', ['on'], ['off']) def test_func_snipstr(): """Test snipstr function.""" # cut middle assert snipstr('abc', 3, ellipsis='...') == 'abc' assert snipstr('abc', 3, ellipsis='....') == 'abc' assert snipstr('abcdefg', 4, ellipsis='') == 'abcd' assert snipstr('abcdefg', 4, ellipsis=None) == 'abc…' assert snipstr(b'abcdefg', 4, ellipsis=None) == b'a...' assert snipstr('abcdefghijklmnop', 8, ellipsis=None) == 'abcd…nop' assert snipstr(b'abcdefghijklmnop', 8, ellipsis=None) == b'abc...op' assert snipstr('abcdefghijklmnop', 9, ellipsis=None) == 'abcd…mnop' assert snipstr(b'abcdefghijklmnop', 9, ellipsis=None) == b'abc...nop' assert snipstr('abcdefghijklmnop', 8, ellipsis='..') == 'abc..nop' assert snipstr('abcdefghijklmnop', 8, ellipsis='....') == 'ab....op' assert snipstr('abcdefghijklmnop', 8, ellipsis='......') == 'ab......' # cut right assert snipstr('abc', 3, snipat=1, ellipsis='...') == 'abc' assert snipstr('abc', 3, snipat=1, ellipsis='....') == 'abc' assert snipstr('abcdefg', 4, snipat=1, ellipsis='') == 'abcd' assert snipstr('abcdefg', 4, snipat=1, ellipsis=None) == 'abc…' assert snipstr(b'abcdefg', 4, snipat=1, ellipsis=None) == b'a...' assert ( snipstr('abcdefghijklmnop', 8, snipat=1, ellipsis=None) == 'abcdefg…' ) assert ( snipstr(b'abcdefghijklmnop', 8, snipat=1, ellipsis=None) == b'abcde...' ) assert ( snipstr('abcdefghijklmnop', 9, snipat=1, ellipsis=None) == 'abcdefgh…' ) assert ( snipstr(b'abcdefghijklmnop', 9, snipat=1, ellipsis=None) == b'abcdef...' ) assert ( snipstr('abcdefghijklmnop', 8, snipat=1, ellipsis='..') == 'abcdef..' ) assert ( snipstr('abcdefghijklmnop', 8, snipat=1, ellipsis='....') == 'abcd....' ) assert ( snipstr('abcdefghijklmnop', 8, snipat=1, ellipsis='......') == 'ab......' 
) # cut left assert snipstr('abc', 3, snipat=0, ellipsis='...') == 'abc' assert snipstr('abc', 3, snipat=0, ellipsis='....') == 'abc' assert snipstr('abcdefg', 4, snipat=0, ellipsis='') == 'defg' assert snipstr('abcdefg', 4, snipat=0, ellipsis=None) == '…efg' assert snipstr(b'abcdefg', 4, snipat=0, ellipsis=None) == b'...g' assert ( snipstr('abcdefghijklmnop', 8, snipat=0, ellipsis=None) == '…jklmnop' ) assert ( snipstr(b'abcdefghijklmnop', 8, snipat=0, ellipsis=None) == b'...lmnop' ) assert ( snipstr('abcdefghijklmnop', 9, snipat=0, ellipsis=None) == '…ijklmnop' ) assert ( snipstr(b'abcdefghijklmnop', 9, snipat=0, ellipsis=None) == b'...klmnop' ) assert ( snipstr('abcdefghijklmnop', 8, snipat=0, ellipsis='..') == '..klmnop' ) assert ( snipstr('abcdefghijklmnop', 8, snipat=0, ellipsis='....') == '....mnop' ) assert ( snipstr('abcdefghijklmnop', 8, snipat=0, ellipsis='......') == '......op' ) def test_func_pformat_printable_bytes(): """Test pformat function with printable bytes.""" value = ( b'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRST' b'UVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c' ) assert pformat(value, height=1, width=60, linewidth=None) == ( '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX' ) assert ( pformat(value, height=8, width=60, linewidth=None) == r""" 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX """.strip() ) # YZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ def test_func_pformat_printable_unicode(): """Test pformat function with printable unicode.""" value = ( '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRST' 'UVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c' ) assert pformat(value, height=1, width=60, linewidth=None) == ( '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX' ) assert ( pformat(value, height=8, width=60, linewidth=None) == r""" 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX """.strip() ) # YZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ def test_func_pformat_hexdump(): """Test pformat function with unprintable bytes.""" value = binascii.unhexlify( '49492a00080000000e00fe0004000100' '00000000000000010400010000000001' '00000101040001000000000100000201' '03000100000020000000030103000100' ) assert pformat(value, height=1, width=60, linewidth=None) == ( '49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 II*............' ) assert ( pformat(value, height=8, width=70, linewidth=None) == """ 00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*............. 10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 ................ 20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 ................ 30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 00 ...... ......... 
""".strip() ) def test_func_pformat_dict(): """Test pformat function with dict.""" value = { 'GTCitationGeoKey': 'WGS 84 / UTM zone 29N', 'GTModelTypeGeoKey': 1, 'GTRasterTypeGeoKey': 1, 'KeyDirectoryVersion': 1, 'KeyRevision': 1, 'KeyRevisionMinor': 2, 'ModelTransformation': numpy.array( [ [6.00000e01, 0.00000e00, 0.00000e00, 6.00000e05], [0.00000e00, -6.00000e01, 0.00000e00, 5.90004e06], [0.00000e00, 0.00000e00, 0.00000e00, 0.00000e00], [0.00000e00, 0.00000e00, 0.00000e00, 1.00000e00], ] ), 'PCSCitationGeoKey': 'WGS 84 / UTM zone 29N', 'ProjectedCSTypeGeoKey': 32629, } assert pformat(value, height=1, width=60, linewidth=None) == ( "{'GTCitationGeoKey': 'WGS 84 / UTM zone 29N', 'GTModelTypeGe" ) assert pformat(value, height=8, width=60, linewidth=None) == ( """{'GTCitationGeoKey': 'WGS 84 / UTM zone 29N', 'GTModelTypeGeoKey': 1, 'GTRasterTypeGeoKey': 1, 'KeyDirectoryVersion': 1, ... [ 0., 0., 0., 0.], [ 0., 0., 0., 1.]]), 'PCSCitationGeoKey': 'WGS 84 / UTM zone 29N', 'ProjectedCSTypeGeoKey': 32629}""" ) def test_func_pformat_list(): """Test pformat function with list.""" value = ( 60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, 60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, ) assert pformat(value, height=1, width=60, linewidth=None) == ( '(60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, 60.0,' ) assert pformat(value, height=8, width=60, linewidth=None) == ( '(60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, 60.0,\n' ' 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0)' ) def test_func_pformat_numpy(): """Test pformat function with numpy array.""" value = numpy.array( ( 60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, 60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, ) ) assert pformat(value, height=1, width=60, linewidth=None) == ( 'array([ 60., 0., 0., 600000., 0., -60., 0., 5900040., 60., 0' ) assert pformat(value, height=8, width=60, linewidth=None) == ( """array([ 60., 0., 0., 600000., 0., -60., 0., 5900040., 60., 0., 0., 600000., 0., -60., 0., 5900040.])""" ) @pytest.mark.skipif(not IS_WIN, reason='not reliable on Linux') def test_func_pformat_xml(): """Test pformat function with XML.""" value = """ DIMAP BEAM-DATAMODEL-V1 0 """ assert pformat(value, height=1, width=60, linewidth=None) == ( ' DIMAP ... 
0 """ ) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_func_lsm2bin(): """Test lsm2bin function.""" # Convert LSM to BIN fname = private_file( 'lsm/Twoareas_Zstacks54slices_3umintervals_5cycles.lsm' ) # fname = private_file( # 'LSM/fish01-wt-t01-10_ForTest-20zplanes10timepoints.lsm') lsm2bin(fname, '', verbose=True) def test_func_tiffcomment(): """Test tiffcomment function.""" data = random_data(numpy.uint8, (33, 31, 3)) with TempFileName('func_tiffcomment') as fname: comment = 'A comment' imwrite( fname, data, photometric=PHOTOMETRIC.RGB, description=comment, metadata=None, ) assert comment == tiffcomment(fname) comment = 'changed comment' tiffcomment(fname, comment) assert comment == tiffcomment(fname) assert_valid_tiff(fname) def test_func_create_output(): """Test create_output function.""" shape = (16, 17) dtype = numpy.uint16 # None a = create_output(None, shape, dtype) assert_array_equal(a, numpy.zeros(shape, dtype)) # existing array b = create_output(a, a.shape, a.dtype) assert a is b.base # 'memmap' a = create_output('memmap', shape, dtype) assert isinstance(a, numpy.memmap) del a # 'memmap:tempdir' a = create_output(f'memmap:{os.path.abspath(TEMP_DIR)}', shape, dtype) assert isinstance(a, numpy.memmap) del a # filename with TempFileName('nopages') as fname: a = create_output(fname, shape, dtype) del a def test_func_reorient(): """Test reoirient func.""" data = numpy.zeros((2, 3, 31, 33, 3), numpy.uint8) for orientation in range(1, 9): reorient(data, orientation) # TODO: assert result @pytest.mark.parametrize('key', [None, 0, 3, 'series']) @pytest.mark.parametrize('out', [None, 'empty', 'memmap', 'dir', 'name']) def test_func_create_output_asarray(out, key): """Test create_output function in context of asarray.""" data = random_data(numpy.uint16, (5, 219, 301)) with TempFileName(f'func_out_{key}_{out}') as fname: imwrite(fname, data) # assert file with TiffFile(fname) as tif: tif.pages.useframes = True tif.pages._load() if key is None: # default obj = tif dat = data elif key == 'series': # series obj = tif.series[0] dat = data else: # single page/frame obj = tif.pages[key] dat = data[key] if key == 0: assert isinstance(obj, TiffPage) else: assert isinstance(obj, TiffFrame) if out is None: # new array image = obj.asarray(out=None) assert_array_equal(dat, image) del image elif out == 'empty': # existing array image = numpy.empty_like(dat) obj.asarray(out=image) assert_array_equal(dat, image) del image elif out == 'memmap': # memmap in temp dir image = obj.asarray(out='memmap') assert isinstance(image, numpy.memmap) assert_array_equal(dat, image) del image elif out == 'dir': # memmap in specified dir tempdir = os.path.dirname(fname) image = obj.asarray(out=f'memmap:{tempdir}') assert isinstance(image, numpy.memmap) assert_array_equal(dat, image) del image elif out == 'name': # memmap in specified file with TempFileName( f'out_{key}_{out}', ext='.memmap' ) as fileout: image = obj.asarray(out=fileout) assert isinstance(image, numpy.memmap) assert_array_equal(dat, image) del image def test_func_bitorder_decode(): """Test bitorder_decode function.""" from tifffile._imagecodecs import bitorder_decode # bytes assert bitorder_decode(b'\x01\x64') == b'\x80&' assert bitorder_decode(b'\x01\x00\x9a\x02') == b'\x80\x00Y@' # numpy array data = numpy.array([1, 666], dtype=numpy.uint16) reverse = numpy.array([128, 16473], dtype=numpy.uint16) # return new array assert_array_equal(bitorder_decode(data), reverse) # array view not supported data = numpy.array( [ [1, 666, 
1431655765, 62], [2, 667, 2863311530, 32], [3, 668, 1431655765, 30], ], dtype=numpy.uint32, ) reverse = numpy.array( [ [1, 666, 1431655765, 62], [2, 16601, 1431655765, 32], [3, 16441, 2863311530, 30], ], dtype=numpy.uint32, ) # if int(numpy.__version__.split('.')[1]) < 23: # with pytest.raises(NotImplementedError): # bitorder_decode(data[1:, 1:3]) # else: assert_array_equal(bitorder_decode(data[1:, 1:3]), reverse[1:, 1:3]) @pytest.mark.parametrize( 'kind', ['u1', 'u2', 'u4', 'u8', 'i1', 'i2', 'i4', 'i8', 'f4', 'f8', 'B'], ) @pytest.mark.parametrize('byteorder', ['>', '<']) def test_func_delta_codec(byteorder, kind): """Test delta codec functions.""" from tifffile._imagecodecs import delta_decode, delta_encode # if byteorder == '>' and numpy.dtype(kind).itemsize == 1: # pytest.skip('duplicate test') if kind[0] in 'iuB': low = numpy.iinfo(kind).min high = numpy.iinfo(kind).max data = numpy.random.randint( low, high, size=33 * 31 * 3, dtype=kind ).reshape(33, 31, 3) else: # floating point if byteorder == '>': pytest.xfail('requires imagecodecs') low, high = -1e5, 1e5 data = numpy.random.randint( low, high, size=33 * 31 * 3, dtype='i4' ).reshape(33, 31, 3) data = data.astype(byteorder + kind) data[16, 14] = [0, 0, 0] data[16, 15] = [low, high, low] data[16, 16] = [high, low, high] data[16, 17] = [low, high, low] data[16, 18] = [high, low, high] data[16, 19] = [0, 0, 0] if kind == 'B': # data = data.reshape(-1) data = data.tobytes() assert delta_decode(delta_encode(data)) == data else: encoded = delta_encode(data, axis=-2) assert encoded.dtype.byteorder == data.dtype.byteorder assert_array_equal(data, delta_decode(encoded, axis=-2)) if not SKIP_CODECS: assert_array_equal( encoded, imagecodecs.delta_encode(data, axis=-2) ) @pytest.mark.parametrize('length', [0, 2, 31 * 33 * 3]) @pytest.mark.parametrize('codec', ['lzma', 'zlib']) def test_func_zlib_lzma_codecs(codec, length): """Test zlib and lzma codec functions.""" if codec == 'zlib': from tifffile._imagecodecs import zlib_decode, zlib_encode encode = zlib_encode decode = zlib_decode elif codec == 'lzma': from tifffile._imagecodecs import lzma_decode, lzma_encode encode = lzma_encode decode = lzma_decode if length: data = numpy.random.randint(255, size=length, dtype=numpy.uint8) assert decode(encode(data)) == data.tobytes() else: data = b'' assert decode(encode(data)) == data PACKBITS_DATA = [ ([], b''), ([0] * 1, b'\x00\x00'), # literal ([0] * 2, b'\xff\x00'), # replicate ([0] * 3, b'\xfe\x00'), ([0] * 64, b'\xc1\x00'), ([0] * 127, b'\x82\x00'), ([0] * 128, b'\x81\x00'), # max replicate ([0] * 129, b'\x81\x00\x00\x00'), ([0] * 130, b'\x81\x00\xff\x00'), ([0] * 128 * 3, b'\x81\x00' * 3), ([255] * 1, b'\x00\xff'), # literal ([255] * 2, b'\xff\xff'), # replicate ([0, 1], b'\x01\x00\x01'), ([0, 1, 2], b'\x02\x00\x01\x02'), ([0, 1] * 32, b'\x3f' + b'\x00\x01' * 32), ([0, 1] * 63 + [2], b'\x7e' + b'\x00\x01' * 63 + b'\x02'), ([0, 1] * 64, b'\x7f' + b'\x00\x01' * 64), # max literal ([0, 1] * 64 + [2], b'\x7f' + b'\x00\x01' * 64 + b'\x00\x02'), ([0, 1] * 64 * 5, (b'\x7f' + b'\x00\x01' * 64) * 5), ([0, 1, 1], b'\x00\x00\xff\x01'), # or b'\x02\x00\x01\x01' ([0] + [1] * 128, b'\x00\x00\x81\x01'), # or b'\x01\x00\x01\x82\x01' ([0] + [1] * 129, b'\x00\x00\x81\x01\x00\x01'), # b'\x01\x00\x01\x81\x01' ([0, 1] * 64 + [2] * 2, b'\x7f' + b'\x00\x01' * 64 + b'\xff\x02'), ([0, 1] * 64 + [2] * 128, b'\x7f' + b'\x00\x01' * 64 + b'\x81\x02'), ([0, 0, 1], b'\x02\x00\x00\x01'), # or b'\xff\x00\x00\x01' ([0, 0] + [1, 2] * 64, b'\xff\x00\x7f' + b'\x01\x02' * 64), ([0] * 
128 + [1], b'\x81\x00\x00\x01'), ([0] * 128 + [1, 2] * 64, b'\x81\x00\x7f' + b'\x01\x02' * 64), ( b'\xaa\xaa\xaa\x80\x00\x2a\xaa\xaa\xaa\xaa\x80\x00' b'\x2a\x22\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa', b'\xfe\xaa\x02\x80\x00\x2a\xfd\xaa\x03\x80\x00\x2a\x22\xf7\xaa', ), ] @pytest.mark.parametrize('data', range(len(PACKBITS_DATA))) def test_func_packbits_decode(data): """Test packbits_decode function.""" from tifffile._imagecodecs import packbits_decode uncompressed, compressed = PACKBITS_DATA[data] assert packbits_decode(compressed) == bytes(uncompressed) def test_func_packints_decode(): """Test packints_decode function.""" from tifffile._imagecodecs import packints_decode decoded = packints_decode(b'', 'B', 1) assert len(decoded) == 0 decoded = packints_decode(b'a', 'B', 1) assert tuple(decoded) == (0, 1, 1, 0, 0, 0, 0, 1) with pytest.raises(NotImplementedError): decoded = packints_decode(b'ab', 'B', 2) assert tuple(decoded) == (1, 2, 0, 1, 1, 2, 0, 2) with pytest.raises(NotImplementedError): decoded = packints_decode(b'abcd', 'B', 3) assert tuple(decoded) == (3, 0, 2, 6, 1, 1, 4, 3, 3, 1) def test_func_check_shape(): """Test check_shape function.""" assert check_shape((10, 10, 4), (10, 10, 4)) assert check_shape((10, 10, 4), (1, 1, 10, 10, 4)) assert not check_shape((4, 10, 10), (10, 10, 4)) assert not check_shape((10, 10, 4), (4, 10, 10)) assert not check_shape((10, 10, 4), (1, 1, 4, 10, 10)) assert check_shape((0,), (0, 0)) assert check_shape((0, 0), (0,)) assert check_shape((4,), (4,)) assert check_shape((1, 4), (4,)) assert check_shape((1, 4), (4, 1)) assert check_shape((4, 1), (4, 1)) assert check_shape((4, 1), (1, 4, 1)) assert check_shape((4, 5), (4, 5, 1)) assert check_shape((4, 5), (4, 5, 1, 1)) assert check_shape((4, 5), (1, 4, 5, 1)) assert not check_shape((1,), (0, 0)) assert not check_shape((1, 0), (1,)) assert not check_shape((4, 1), (1,)) assert not check_shape((4, 1), (2,)) assert not check_shape((4, 1), (4,)) assert not check_shape((4, 1), (2, 2)) assert not check_shape((4, 1), (2, 1)) assert not check_shape((3, 4, 5), (4, 5, 3)) ############################################################################### # Test FileHandle class FILEHANDLE_NAME = public_file('tifffile/test_FileHandle.bin') FILEHANDLE_SIZE = 7937381 FILEHANDLE_OFFSET = 333 FILEHANDLE_LENGTH = 7937381 - 666 @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def create_filehandle_file(): """Write test_FileHandle.bin file.""" # array start 999 # array end 1254 # recarray start 2253 # recarray end 6078 # tiff start 7077 # tiff end 12821 # mm offset = 13820 # mm size = 7936382 with open(FILEHANDLE_NAME, 'wb') as fh: # buffer numpy.ones(999, dtype=numpy.uint8).tofile(fh) # array print('array start', fh.tell()) numpy.arange(255, dtype=numpy.uint8).tofile(fh) print('array end', fh.tell()) # buffer numpy.ones(999, dtype=numpy.uint8).tofile(fh) # recarray print('recarray start', fh.tell()) a = numpy.recarray( (255, 3), dtype=[('x', numpy.float32), ('y', numpy.uint8)] ) for i in range(3): a[:, i].x = numpy.arange(255, dtype=numpy.float32) a[:, i].y = numpy.arange(255, dtype=numpy.uint8) a.tofile(fh) print('recarray end', fh.tell()) # buffer numpy.ones(999, dtype=numpy.uint8).tofile(fh) # tiff print('tiff start', fh.tell()) with open('data/public/tifffile/generic_series.tif', 'rb') as tif: fh.write(tif.read()) print('tiff end', fh.tell()) # buffer numpy.ones(999, dtype=numpy.uint8).tofile(fh) # micromanager print('micromanager start', fh.tell()) with open('data/public/tifffile/micromanager.tif', 'rb') as tif: 
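# the embedded micromanager.tif region corresponds to the 'mm offset' and
# 'mm size' values noted in the comments above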
fh.write(tif.read()) print('micromanager end', fh.tell()) # buffer numpy.ones(999, dtype=numpy.uint8).tofile(fh) def assert_filehandle(fh, offset=0): """Assert filehandle can read test_FileHandle.bin.""" assert__repr__(fh) assert not fh.closed assert fh.open() is None # open an open file fh.close() # assert fh.closed assert fh.open() is None # open a closed file size = FILEHANDLE_SIZE - 2 * offset pad = 999 - offset assert fh.size == size assert fh.tell() == 0 assert fh.read(4) == b'\x01\x01\x01\x01' fh.seek(pad - 4) assert fh.tell() == pad - 4 assert fh.read(4) == b'\x01\x01\x01\x01' fh.seek(-4, whence=1) assert fh.tell() == pad - 4 assert fh.read(4) == b'\x01\x01\x01\x01' fh.seek(-pad, whence=2) assert fh.tell() == size - pad assert fh.read(4) == b'\x01\x01\x01\x01' # assert array fh.seek(pad, whence=0) assert fh.tell() == pad assert_array_equal( fh.read_array(numpy.uint8, 255), numpy.arange(255, dtype=numpy.uint8) ) # assert records fh.seek(999, whence=1) assert fh.tell() == 2253 - offset records = fh.read_record( [('x', numpy.float32), ('y', numpy.uint8)], (255, 3) ) assert_array_equal(records.y[:, 0], range(255)) assert_array_equal(records.x, records.y) # assert memmap if fh.is_file: assert_array_equal( fh.memmap_array(numpy.uint8, 255, pad), numpy.arange(255, dtype=numpy.uint8), ) @pytest.mark.skipif(SKIP_HTTP, reason=REASON) def test_filehandle_seekable(): """Test FileHandle must be seekable.""" from urllib.request import HTTPHandler, build_opener opener = build_opener(HTTPHandler()) opener.addheaders = [('User-Agent', 'test_tifffile.py')] try: fh = opener.open(URL + 'test/test_http.tif') except OSError: pytest.skip(URL + 'test/test_http.tif') with pytest.raises(ValueError): FileHandle(fh) def test_filehandle_write_bytesio(): """Test write to FileHandle from BytesIO.""" value = b'123456789' buf = BytesIO() with FileHandle(buf) as fh: fh.write(value) assert buf.getvalue() == value def test_filehandle_write_bytesio_offset(): """Test write to FileHandle from BytesIO with offset.""" pad = b'abcd' value = b'123456789' buf = BytesIO() buf.write(pad) with FileHandle(buf) as fh: fh.write(value) buf.write(pad) # assert buffer buf.seek(len(pad)) assert buf.read(len(value)) == value buf.seek(2) with FileHandle(buf, offset=len(pad), size=len(value)) as fh: assert fh.read(len(value)) == value @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_filename(): """Test FileHandle from filename.""" with FileHandle(FILEHANDLE_NAME) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file assert_filehandle(fh) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_filename_offset(): """Test FileHandle from filename with offset.""" with FileHandle( FILEHANDLE_NAME, offset=FILEHANDLE_OFFSET, size=FILEHANDLE_LENGTH ) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file assert_filehandle(fh, FILEHANDLE_OFFSET) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_bytesio(): """Test FileHandle from BytesIO.""" with open(FILEHANDLE_NAME, 'rb') as fh: stream = BytesIO(fh.read()) with FileHandle(stream) as fh: assert fh.name == 'Unnamed binary stream' assert not fh.is_file assert_filehandle(fh) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_bytesio_offset(): """Test FileHandle from BytesIO with offset.""" with open(FILEHANDLE_NAME, 'rb') as fh: stream = BytesIO(fh.read()) with FileHandle( stream, offset=FILEHANDLE_OFFSET, size=FILEHANDLE_LENGTH ) as fh: assert fh.name == 'Unnamed binary stream' assert not fh.is_file 
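# stream-backed handles report is_file False, so assert_filehandle
# skips its memmap_array check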
assert_filehandle(fh, offset=FILEHANDLE_OFFSET) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_openfile(): """Test FileHandle from open file.""" with open(FILEHANDLE_NAME, 'rb') as fhandle: with FileHandle(fhandle) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file assert_filehandle(fh) assert not fhandle.closed @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_openfile_offset(): """Test FileHandle from open file with offset.""" with open(FILEHANDLE_NAME, 'rb') as fhandle: with FileHandle( fhandle, offset=FILEHANDLE_OFFSET, size=FILEHANDLE_LENGTH ) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file assert_filehandle(fh, offset=FILEHANDLE_OFFSET) assert not fhandle.closed @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_filehandle(): """Test FileHandle from other FileHandle.""" with FileHandle(FILEHANDLE_NAME, 'rb') as fhandle: with FileHandle(fhandle) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file assert_filehandle(fh) assert not fhandle.closed @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_offset(): """Test FileHandle from other FileHandle with offset.""" with FileHandle(FILEHANDLE_NAME, 'rb') as fhandle: with FileHandle( fhandle, offset=FILEHANDLE_OFFSET, size=FILEHANDLE_LENGTH ) as fh: assert fh.name == 'test_FileHandle@333.bin' assert fh.is_file assert_filehandle(fh, offset=FILEHANDLE_OFFSET) assert not fhandle.closed @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_filehandle_reopen(): """Test FileHandle close and open.""" try: fh = FileHandle(FILEHANDLE_NAME) assert not fh.closed assert fh.is_file fh.close() assert fh.closed fh.open() assert not fh.closed assert fh.is_file assert fh.name == 'test_FileHandle.bin' assert_filehandle(fh) finally: fh.close() @pytest.mark.skipif(SKIP_HTTP or not IS_CG, reason=REASON) def test_filehandle_unc_path(): """Test FileHandle from UNC path.""" with FileHandle(r'\\localhost\test$\test_FileHandle.bin') as fh: assert fh.name == 'test_FileHandle.bin' assert fh.dirname == '\\\\localhost\\test$\\' assert_filehandle(fh) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) def test_filehandle_fsspec_localfileopener(): """Test FileHandle from fsspec LocalFileOpener.""" with fsspec.open(FILEHANDLE_NAME, 'rb') as fhandle: with FileHandle(fhandle) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file # fails with fsspec 2022.7 assert_filehandle(fh) assert not fhandle.closed @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) def test_filehandle_fsspec_openfile(): """Test FileHandle from fsspec OpenFile.""" fhandle = fsspec.open(FILEHANDLE_NAME, 'rb') with FileHandle(fhandle) as fh: assert fh.name == 'test_FileHandle.bin' assert fh.is_file assert_filehandle(fh) fhandle.close() @pytest.mark.skipif(SKIP_PUBLIC or SKIP_HTTP or SKIP_ZARR, reason=REASON) def test_filehandle_fsspec_http(): """Test FileHandle from HTTP via fsspec.""" with fsspec.open(URL + 'test/test_FileHandle.bin', 'rb') as fhandle: with FileHandle(fhandle) as fh: assert fh.name == 'test_FileHandle.bin' assert not fh.is_file assert_filehandle(fh) assert not fhandle.closed def test_filehandle_exclusive_creation(): """Test FileHandle with exclusive creation mode 'x'.""" # https://github.com/cgohlke/tifffile/issues/221 with TempFileName('read_filehandle_exclusive', ext='.bin') as fname: if os.path.exists(fname): os.remove(fname) with FileHandle(fname, mode='x'): pass with pytest.raises(FileExistsError): with FileHandle(fname, mode='x'): pass 
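# The helper below is an illustrative sketch, not part of the test suite:
# it condenses the FileHandle offset/size windowing behavior exercised by
# the tests above into a minimal, self-contained example. It assumes only
# names already imported in this module (BytesIO, FileHandle).


def _example_filehandle_window():
    """Sketch: reads, seeks, size, and tell are relative to the window."""
    buf = BytesIO(b'head' + b'payload' + b'tail')
    with FileHandle(buf, offset=4, size=7) as fh:
        assert fh.size == 7  # size of the window, not the whole stream
        assert fh.read(7) == b'payload'  # reading starts at offset 4
        fh.seek(-3, whence=2)  # whence=2 is the end of the window
        assert fh.tell() == 4
        assert fh.read(3) == b'oad'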
############################################################################### # Test read specific files if SKIP_EXTENDED or SKIP_PRIVATE: TIGER_FILES = [] TIGER_IDS = [] else: TIGER_FILES = ( public_files('graphicsmagick.org/be/*.tif') + public_files('graphicsmagick.org/le/*.tif') + public_files('graphicsmagick.org/bigtiff-be/*.tif') + public_files('graphicsmagick.org/bigtiff-le/*.tif') ) TIGER_IDS = [ '-'.join(f.split(os.path.sep)[-2:]) .replace('-tiger', '') .replace('.tif', '') for f in TIGER_FILES ] @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS or SKIP_EXTENDED, reason=REASON) @pytest.mark.parametrize('fname', TIGER_FILES, ids=TIGER_IDS) def test_read_tigers(fname): """Test tiger images from GraphicsMagick.""" # ftp://ftp.graphicsmagick.org/pub/tiff-samples with TiffFile(fname) as tif: byteorder = {'le': '<', 'be': '>'}[os.path.split(fname)[0][-2:]] databits = int(fname.rsplit('.tif')[0][-2:]) # assert file properties assert_file_flags(tif) assert tif.byteorder == byteorder assert tif.is_bigtiff == ('bigtiff' in fname) assert len(tif.pages) == 1 # assert page properties page = tif.pages.first assert_page_flags(page) assert page.tags['DocumentName'].value == os.path.basename(fname) assert page.imagewidth == 73 assert page.imagelength == 76 assert page.bitspersample == databits assert (page.photometric == PHOTOMETRIC.RGB) == ('rgb' in fname) assert (page.photometric == PHOTOMETRIC.PALETTE) == ( 'palette' in fname ) assert page.is_tiled == ('tile' in fname) assert (page.planarconfig == PLANARCONFIG.CONTIG) == ( 'planar' not in fname ) if 'minisblack' in fname: assert page.photometric == PHOTOMETRIC.MINISBLACK # float24 not supported # if 'float' in fname and databits == 24: # with pytest.raises(ValueError): # data = tif.asarray() # return # assert data shapes data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.flags['C_CONTIGUOUS'] # if 'palette' in fname: # shape = (76, 73, 3) if 'rgb' in fname: if 'planar' in fname: shape = (3, 76, 73) else: shape = (76, 73, 3) elif 'separated' in fname: if 'planar' in fname: shape = (4, 76, 73) else: shape = (76, 73, 4) else: shape = (76, 73) assert data.shape == shape # assert data types if 'float' in fname: if databits == 24: dtype = numpy.float32 else: dtype = f'float{databits}' # elif 'palette' in fname: # dtype = numpy.uint16 elif databits == 1: dtype = numpy.bool_ elif databits <= 8: dtype = numpy.uint8 elif databits <= 16: dtype = numpy.uint16 elif databits <= 32: dtype = numpy.uint32 elif databits <= 64: dtype = numpy.uint64 assert data.dtype == dtype assert_decode_method(page, data) assert_aszarr_method(page, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_exif_paint(): """Test read EXIF tags.""" fname = private_file('exif/paint.tif') with TiffFile(fname) as tif: exif = tif.pages.first.tags['ExifTag'].value assert exif['ColorSpace'] == 65535 assert exif['ExifVersion'] == '0230' assert exif['UserComment'] == 'paint' assert tif.fstat.st_size == 4234366 assert_aszarr_method(tif) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) def test_read_hopper_2bit(): """Test read 2-bit, fillorder=lsb2msb.""" # https://github.com/python-pillow/Pillow/pull/1789 fname = public_file('pillow/tiff_gray_2_4_bpp/hopper2.tif') with TiffFile(fname) as tif: assert tif.byteorder == '<' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISBLACK assert not page.is_contiguous assert 
page.compression == COMPRESSION.NONE assert page.imagewidth == 128 assert page.imagelength == 128 assert page.bitspersample == 2 assert page.samplesperpixel == 1 # assert series properties series = tif.series[0] assert series.shape == (128, 128) assert series.dtype == numpy.uint8 assert series.axes == 'YX' assert series.kind == 'uniform' assert series.dataoffset is None # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (128, 128) assert data[50, 63] == 3 assert_aszarr_method(tif, data) assert__str__(tif) # reversed fname = public_file('pillow/tiff_gray_2_4_bpp/hopper2R.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.fillorder == FILLORDER.LSB2MSB assert_array_equal(tif.asarray(), data) assert_aszarr_method(tif) assert__str__(tif) # inverted fname = public_file('pillow/tiff_gray_2_4_bpp/hopper2I.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISWHITE assert_array_equal(tif.asarray(), 3 - data) assert_aszarr_method(tif) assert__str__(tif) # inverted and reversed fname = public_file('pillow/tiff_gray_2_4_bpp/hopper2IR.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISWHITE assert_array_equal(tif.asarray(), 3 - data) assert_aszarr_method(tif) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) def test_read_hopper_4bit(): """Test read 4-bit, fillorder=lsb2msb.""" # https://github.com/python-pillow/Pillow/pull/1789 fname = public_file('pillow/tiff_gray_2_4_bpp/hopper4.tif') with TiffFile(fname) as tif: assert tif.byteorder == '<' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISBLACK assert not page.is_contiguous assert page.compression == COMPRESSION.NONE assert page.imagewidth == 128 assert page.imagelength == 128 assert page.bitspersample == 4 assert page.samplesperpixel == 1 # assert series properties series = tif.series[0] assert series.shape == (128, 128) assert series.dtype == numpy.uint8 assert series.axes == 'YX' assert series.kind == 'uniform' assert series.dataoffset is None # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (128, 128) assert data[50, 63] == 13 # reversed fname = public_file('pillow/tiff_gray_2_4_bpp/hopper4R.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.fillorder == FILLORDER.LSB2MSB assert_array_equal(tif.asarray(), data) assert__str__(tif) # inverted fname = public_file('pillow/tiff_gray_2_4_bpp/hopper4I.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISWHITE assert_array_equal(tif.asarray(), 15 - data) assert__str__(tif) # inverted and reversed fname = public_file('pillow/tiff_gray_2_4_bpp/hopper4IR.tif') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.MINISWHITE assert_array_equal(tif.asarray(), 15 - data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_lsb2msb(): """Test read fillorder=lsb2msb, 2 series.""" # http://lists.openmicroscopy.org.uk/pipermail/ome-users # /2015-September/005635.html fname = private_file('test_lsb2msb.tif') with TiffFile(fname) as tif: assert tif.byteorder == '<' assert len(tif.pages) == 2 assert len(tif.series) == 2 # assert page properties page = 
tif.pages.first assert page.is_contiguous assert page.compression == COMPRESSION.NONE assert page.imagewidth == 7100 assert page.imagelength == 4700 assert page.bitspersample == 16 assert page.samplesperpixel == 3 page = tif.pages[1] assert page.is_contiguous assert page.compression == COMPRESSION.NONE assert page.imagewidth == 7100 assert page.imagelength == 4700 assert page.bitspersample == 16 assert page.samplesperpixel == 1 # assert series properties series = tif.series[0] assert series.shape == (4700, 7100, 3) assert series.dtype == numpy.uint16 assert series.axes == 'YXS' assert series.dataoffset is None series = tif.series[1] assert series.shape == (4700, 7100) assert series.dtype == numpy.uint16 assert series.axes == 'YX' assert series.kind == 'generic' assert series.dataoffset is None # assert data data = tif.asarray(series=0) assert isinstance(data, numpy.ndarray) assert data.shape == (4700, 7100, 3) assert data[2350, 3550, 1] == 60457 assert_aszarr_method(tif, data, series=0) data = tif.asarray(series=1) assert isinstance(data, numpy.ndarray) assert data.flags['C_CONTIGUOUS'] assert data.shape == (4700, 7100) assert data[2350, 3550] == 56341 assert_aszarr_method(tif, data, series=1) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_read_gimp_u2(): """Test read uint16 with horizontal predictor by GIMP.""" fname = public_file('tifffile/gimp_u2.tiff') with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.photometric == PHOTOMETRIC.RGB assert page.predictor == PREDICTOR.HORIZONTAL assert page.imagewidth == 333 assert page.imagelength == 231 assert page.samplesperpixel == 3 image = tif.asarray() assert image.flags['C_CONTIGUOUS'] assert tuple(image[110, 110]) == (23308, 17303, 41160) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_read_gimp_f4(): """Test read float32 with horizontal predictor by GIMP.""" fname = public_file('tifffile/gimp_f4.tiff') with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.photometric == PHOTOMETRIC.RGB assert page.predictor == PREDICTOR.HORIZONTAL assert page.imagewidth == 333 assert page.imagelength == 231 assert page.samplesperpixel == 3 image = tif.asarray() assert image.flags['C_CONTIGUOUS'] assert_array_almost_equal( image[110, 110], (0.35565534, 0.26402164, 0.6280674) ) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) def test_read_gimp_f2(): """Test read float16 with horizontal predictor by GIMP.""" fname = public_file('tifffile/gimp_f2.tiff') with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.photometric == PHOTOMETRIC.RGB assert page.predictor == PREDICTOR.HORIZONTAL assert page.imagewidth == 333 assert page.imagelength == 231 assert page.samplesperpixel == 3 image = tif.asarray() assert image.flags['C_CONTIGUOUS'] assert_array_almost_equal( image[110, 110].astype(numpy.float64), (0.35571289, 0.26391602, 0.62792969), ) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG8.available, reason=REASON, ) def test_read_dng_jpeglossy(): """Test read JPEG_LOSSY in DNG.""" fname = private_file('DNG/Adobe DNG Converter.dng') with TiffFile(fname) as tif: assert len(tif.pages) == 1 
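# a single top-level page; the SubIFD previews and raw image are
# exposed as additional series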
@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG8.available,
    reason=REASON,
)
def test_read_dng_jpeglossy():
    """Test read JPEG_LOSSY in DNG."""
    fname = private_file('DNG/Adobe DNG Converter.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        assert len(tif.series) == 6
        for series in tif.series:
            image = series.asarray()
            assert_aszarr_method(series, image)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.LJPEG.available,
    reason=REASON,
)
def test_read_dng_ljpeg():
    """Test read 14-bit CFA LJPEG in DNG."""
    fname = private_file('DNG/uint14_ljpeg_cfa.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        assert len(tif.series) == 3
        page = tif.pages.first.pages[0]
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.imagewidth == 7392
        assert page.imagelength == 4950
        assert page.bitspersample == 14
        assert page.samplesperpixel == 1
        assert page.tags['CFARepeatPatternDim'].value == (2, 2)
        assert page.tags['CFAPattern'].value == b'\0\1\1\2'
        assert page.tags['CFALayout'].value == 1
        image = page.asarray()
        assert image.shape == (4950, 7392)
        assert image.dtype == numpy.uint16
        assert image[1024, 1024] == 3425
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.LJPEG.available,
    reason=REASON,
)
def test_read_dng_linearraw():
    """Test read 12-bit LinearRAW LJPEG in DNG."""
    fname = private_file('DNG/LinearRaw.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        assert len(tif.series) == 2
        page = tif.pages.first.pages[0]
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.LINEAR_RAW
        assert page.imagewidth == 4032
        assert page.imagelength == 3024
        assert page.bitspersample == 12
        assert page.samplesperpixel == 3
        assert page.tile == (378, 504)
        image = page.asarray()
        assert image.shape == (3024, 4032, 3)
        assert image.dtype == numpy.uint16
        assert tuple(image[740, 3660]) == (1474, 3090, 1655)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
@pytest.mark.parametrize('fp', ['fp16', 'fp24', 'fp32'])
def test_read_dng_floatpredx2(fp):
    """Test read FLOATINGPOINTX2 predictor in DNG."""
    #
    fname = private_file(f'DNG/fpx2/hdrmerge-bayer-{fp}-w-pred-deflate.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        assert len(tif.series) == 3
        page = tif.pages.first.pages[0]
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.predictor == 34894
        assert page.imagewidth == 5920
        assert page.imagelength == 3950
        assert page.sampleformat == SAMPLEFORMAT.IEEEFP
        assert page.bitspersample == int(fp[2:])
        assert page.samplesperpixel == 1
        if fp == 'fp24':
            with pytest.raises(NotImplementedError):
                image = page.asarray()
        else:
            image = page.asarray()
            assert_aszarr_method(page, image)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEGXL.available,
    reason=REASON,
)
def test_read_dng_jpegxl():
    """Test read JPEGXL in DNG."""
    fname = private_file('DNG/20240125_204051_1.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        assert len(tif.series) == 6
        page = tif.pages.first.pages[0]
        assert not page.is_reduced
        assert page.compression == COMPRESSION.JPEGXL_DNG
        assert page.photometric == PHOTOMETRIC.LINEAR_RAW
        assert page.imagewidth == 5712
        assert page.imagelength == 4284
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        image = page.asarray()
        assert image.shape == (4284, 5712, 3)
        assert image.dtype == numpy.uint16
        assert image[1024, 1024, 1] == 36
        page = tif.pages.first.pages[1]
        assert page.is_reduced
        assert page.compression == COMPRESSION.JPEGXL_DNG
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 1024
        assert page.imagelength == 768
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        image = page.asarray()
        assert image.shape == (768, 1024, 3)
        assert image.dtype == numpy.uint8
        assert image[512, 512, 1] in {47, 48}
        assert page.tags['JXLDistance'].value == 2.0
        assert page.tags['JXLEffort'].value == 7
        assert page.tags['JXLDecodeSpeed'].value == 4
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
@pytest.mark.parametrize('fname', ['sample1.orf', 'sample1.rw2'])
def test_read_rawformats(fname, caplog):
    """Test parse unsupported RAW formats."""
    fname = private_file(f'RAWformats/{fname}')
    with TiffFile(fname) as tif:
        assert 'RAW format' in caplog.text
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_iss_vista():
    """Test read bogus imagedepth tag by ISS Vista."""
    fname = private_file('iss/10um_beads_14stacks_ch1.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 14
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.tags['ImageDepth'].value == 14  # bogus
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (14, 256, 256)
        assert series.dtype == numpy.int16
        assert series.axes == 'IYX'  # ZYX
        assert series.kind == 'uniform'
        assert isinstance(series.pages[3], TiffFrame)
        assert_aszarr_method(series)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_vips():
    """Test read 347x641 RGB, bigtiff, pyramid, tiled, produced by VIPS."""
    fname = private_file('vips.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 4
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert page.is_tiled
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.imagewidth == 641
        assert page.imagelength == 347
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.is_pyramidal
        assert len(series.levels) == 4
        assert series.shape == (347, 641, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'generic'
        # level 3
        series = series.levels[3]
        page = series.pages[0]
        assert page.is_reduced
        assert page.is_tiled
        assert series.shape == (43, 80, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (347, 641, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[132, 361]) == (114, 233, 58)
        assert_aszarr_method(tif, data, series=0, level=0)
        assert__str__(tif)

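# Not part of the test suite: a sketch of the Zarr access pattern that
# assert_aszarr_method exercises, assuming zarr 2.x is installed and
# 'vips.tif' stands in for any pyramidal TIFF.
def example_read_pyramid_level():
    """Sketch: read one level of a pyramidal series via the Zarr interface."""
    import zarr

    with TiffFile('vips.tif') as tif:
        store = tif.series[0].aszarr(level=3)  # smallest pyramid level
        data = zarr.open(store, mode='r')[:]
        store.close()
    return data
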
@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_volumetric():
    """Test read 128x128x128, float32, tiled SGI."""
    fname = public_file('tifffile/sgi_depth.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_volumetric
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.is_tiled
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.imagedepth == 128
        assert page.tilewidth == 128
        assert page.tilelength == 128
        assert page.tiledepth == 1
        assert page.tile == (128, 128)
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        assert page.tags['Software'].value == (
            'MFL MeVis File Format Library, TIFF Module'
        )
        # assert series properties
        series = tif.series[0]
        assert series.shape == (128, 128, 128)
        assert series.dtype == numpy.float32
        assert series.axes == 'ZYX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (128, 128, 128)
        assert data.dtype == numpy.float32
        assert data[64, 64, 64] == 0.0
        assert_decode_method(page)
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_oxford():
    """Test read 601x81, uint8, LZW."""
    fname = public_file('juicypixels/oxford.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 601
        assert page.imagelength == 81
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 81, 601)
        assert series.dtype == numpy.uint8
        assert series.axes == 'SYX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 81, 601)
        assert data.dtype == numpy.uint8
        assert data[1, 24, 49] == 191
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_cramps():
    """Test 800x607 uint8, PackBits."""
    fname = public_file('juicypixels/cramps.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.compression == COMPRESSION.PACKBITS
        assert page.photometric == PHOTOMETRIC.MINISWHITE
        assert page.imagewidth == 800
        assert page.imagelength == 607
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (607, 800)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (607, 800)
        assert data.dtype == numpy.uint8
        assert data[273, 426] == 34
        assert_aszarr_method(tif, data)
        assert__str__(tif)

@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_cramps_tile():
    """Test read 800x607 uint8, raw, volumetric, tiled."""
    fname = public_file('juicypixels/cramps-tile.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_tiled
        assert not page.is_volumetric
        assert page.compression == COMPRESSION.NONE
        assert page.photometric == PHOTOMETRIC.MINISWHITE
        assert page.imagewidth == 800
        assert page.imagelength == 607
        assert page.imagedepth == 1
        assert page.tilewidth == 256
        assert page.tilelength == 256
        assert page.tiledepth == 1
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (607, 800)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (607, 800)
        assert data.dtype == numpy.uint8
        assert data[273, 426] == 34
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_jello():
    """Test read 256x192x3, uint16, palette, PackBits."""
    fname = public_file('juicypixels/jello.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 256
        assert page.imagelength == 192
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (192, 256)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = page.asrgb(uint8=False)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (192, 256, 3)
        assert data.dtype == numpy.uint16
        assert tuple(data[100, 140, :]) == (48895, 65279, 48895)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_quad_lzw():
    """Test read 384x512 RGB uint8 old style LZW."""
    fname = public_file('libtiff/quad-lzw-compat.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_tiled
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 384
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (384, 512, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (384, 512, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[309, 460, :]) == (0, 163, 187)
        assert_aszarr_method(tif, data)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_quad_lzw_le():
    """Test read 384x512 RGB uint8 LZW."""
    fname = private_file('quad-lzw_le.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert not page.is_tiled
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 384
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (384, 512, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (384, 512, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[309, 460, :]) == (0, 163, 187)
        assert_aszarr_method(tif, data)
        assert_decode_method(page)

@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_quad_tile():
    """Test read 384x512 RGB uint8 LZW tiled."""
    # Strips and tiles defined in same page
    fname = public_file('juicypixels/quad-tile.tif')
    with TiffFile(fname) as tif:
        assert__str__(tif)
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.is_tiled
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 384
        assert page.imagedepth == 1
        assert page.tilewidth == 128
        assert page.tilelength == 128
        assert page.tiledepth == 1
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (384, 512, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        # assert 'invalid tile data (49153,) (1, 128, 128, 3)' in caplog.text
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (384, 512, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[309, 460, :]) == (0, 163, 187)
        assert_aszarr_method(tif, data)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_strike():
    """Test read 256x200 RGBA uint8 LZW."""
    fname = public_file('juicypixels/strike.tif')
    with TiffFile(fname) as tif:
        assert__str__(tif)
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 256
        assert page.imagelength == 200
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        assert page.extrasamples[0] == EXTRASAMPLE.ASSOCALPHA
        # assert series properties
        series = tif.series[0]
        assert series.shape == (200, 256, 4)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (200, 256, 4)
        assert data.dtype == numpy.uint8
        assert tuple(data[65, 139, :]) == (43, 34, 17, 91)
        assert_aszarr_method(tif, data)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_incomplete_tile_contig():
    """Test read PackBits compressed incomplete tile, contig RGB."""
    fname = public_file('GDAL/contig_tiled.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 35
        assert page.imagelength == 37
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (37, 35, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = page.asarray()
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (37, 35, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[19, 31]) == (50, 50, 50)
        assert tuple(data[36, 34]) == (70, 70, 70)
        assert_aszarr_method(page, data)
        assert_decode_method(page)
        assert__str__(tif)

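# Not part of the test suite: a sketch of un-premultiplying associated
# alpha (EXTRASAMPLE.ASSOCALPHA) as stored in files like strike.tif above.
def example_unpremultiply_alpha():
    """Sketch: convert associated (premultiplied) alpha to straight RGB."""
    import tifffile

    data = tifffile.imread('strike.tif')  # placeholder uint8 RGBA file
    rgb = data[..., :3].astype(numpy.float32)
    alpha = data[..., 3:].astype(numpy.float32)
    straight = rgb * 255.0 / numpy.maximum(alpha, 1.0)  # avoid divide by zero
    return numpy.clip(straight, 0.0, 255.0).astype(numpy.uint8)
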
@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_incomplete_tile_separate():
    """Test read PackBits compressed incomplete tile, separate RGB."""
    fname = public_file('GDAL/separate_tiled.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 35
        assert page.imagelength == 37
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 37, 35)
        assert series.dtype == numpy.uint8
        assert series.axes == 'SYX'
        assert series.kind == 'uniform'
        # assert data
        data = page.asarray()
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 37, 35)
        assert data.dtype == numpy.uint8
        assert tuple(data[:, 19, 31]) == (50, 50, 50)
        assert tuple(data[:, 36, 34]) == (70, 70, 70)
        assert_aszarr_method(page, data)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_django():
    """Test read 3x480x320, uint16, palette, raw."""
    fname = private_file('django.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 320
        assert page.imagelength == 480
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (480, 320)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = page.asrgb(uint8=False)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (480, 320, 3)
        assert data.dtype == numpy.uint16
        assert tuple(data[64, 64, :]) == (65535, 52171, 63222)
        assert__str__(tif)

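# Not part of the test suite: a sketch of applying a ColorMap tag by hand,
# equivalent in spirit to the TiffPage.asrgb calls in the palette tests.
# The (3, 2**bitspersample) shape of the tag value is an assumption here.
def example_palette_to_rgb():
    """Sketch: map palette indices through the ColorMap tag."""
    with TiffFile('django.tiff') as tif:  # placeholder palette file
        page = tif.pages.first
        indices = page.asarray()  # (imagelength, imagewidth) indices
        colormap = page.tags['ColorMap'].value  # (3, 2**bitspersample)
        return numpy.moveaxis(colormap[:, indices], 0, -1)  # (H, W, 3)
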
@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_pygame_icon():
    """Test read 128x128 RGBA uint8 PackBits."""
    fname = private_file('pygame_icon.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        assert page.extrasamples[0] == EXTRASAMPLE.UNASSALPHA  # ?
        assert page.tags['Software'].value == 'QuickTime 5.0.5'
        assert page.tags['HostComputer'].value == 'MacOS 10.1.2'
        assert page.tags['DateTime'].value == '2001:12:21 04:34:56'
        assert page.datetime == datetime.datetime(2001, 12, 21, 4, 34, 56)
        # assert series properties
        series = tif.series[0]
        assert series.shape == (128, 128, 4)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (128, 128, 4)
        assert data.dtype == numpy.uint8
        assert tuple(data[22, 112, :]) == (100, 99, 98, 132)
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_rgba_wo_extra_samples():
    """Test read 1065x785 RGBA uint8."""
    fname = private_file('rgba_wo_extra_samples.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 1065
        assert page.imagelength == 785
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        # with self.assertRaises(AttributeError):
        #     page.extrasamples
        # assert series properties
        series = tif.series[0]
        assert series.shape == (785, 1065, 4)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (785, 1065, 4)
        assert data.dtype == numpy.uint8
        assert tuple(data[560, 412, :]) == (60, 92, 74, 255)
        assert_aszarr_method(tif, data)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_rgb565():
    """Test read 64x64 RGB uint8 5,6,5 bitspersample."""
    fname = private_file('rgb565.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == (5, 6, 5)
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (64, 64, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (64, 64, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[56, 32, :]) == (239, 243, 247)
        assert_aszarr_method(tif, data)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_generic_series():
    """Test read 4 series in 6 pages."""
    fname = public_file('tifffile/generic_series.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 6
        assert len(tif.series) == 4
        # assert series 0 properties
        series = tif.series[0]
        assert series.shape == (3, 20, 20)
        assert series.dtype == numpy.uint8
        assert series.axes == 'IYX'
        assert series.kind == 'generic'
        page = series.pages[0]
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        data = tif.asarray(series=0)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 20, 20)
        assert data.dtype == numpy.uint8
        assert tuple(data[:, 9, 9]) == (19, 90, 206)
        assert_aszarr_method(tif, data, series=0)
        # assert series 1 properties
        series = tif.series[1]
        assert series.shape == (10, 10, 3)
        assert series.dtype == numpy.float32
        assert series.axes == 'YXS'
        assert series.kind == 'generic'
        page = series.pages[0]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 10
        assert page.imagelength == 10
        assert page.bitspersample == 32
        assert page.samplesperpixel == 3
        data = tif.asarray(series=1)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (10, 10, 3)
        assert data.dtype == numpy.float32
        assert round(abs(data[9, 9, 1] - 214.5733642578125), 7) == 0
        assert_aszarr_method(tif, data, series=1)
        # assert series 2 properties
        series = tif.series[2]
        assert series.shape == (20, 20, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'generic'
        page = series.pages[0]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        data = tif.asarray(series=2)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (20, 20, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[9, 9, :]) == (19, 90, 206)
        assert_aszarr_method(tif, data, series=2)
        # assert series 3 properties
        series = tif.series[3]
        assert series.shape == (10, 10)
        assert series.dtype == numpy.float32
        assert series.axes == 'YX'
        assert series.kind == 'generic'
        page = series.pages[0]
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 10
        assert page.imagelength == 10
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        data = tif.asarray(series=3)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (10, 10)
        assert data.dtype == numpy.float32
        assert round(abs(data[9, 9] - 223.1648712158203), 7) == 0
        assert_aszarr_method(tif, data, series=3)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_freeimage():
    """Test read 3 series in 3 pages RGB LZW."""
    fname = private_file('freeimage.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 3
        assert len(tif.series) == 3
        for i, shape in enumerate(((100, 600), (379, 574), (689, 636))):
            series = tif.series[i]
            shape = shape + (3,)
            assert series.shape == shape
            assert series.dtype == numpy.uint8
            assert series.axes == 'YXS'
            assert series.kind == 'generic'
            page = series.pages[0]
            assert page.photometric == PHOTOMETRIC.RGB
            assert page.compression == COMPRESSION.LZW
            assert page.imagewidth == shape[1]
            assert page.imagelength == shape[0]
            assert page.bitspersample == 8
            assert page.samplesperpixel == 3
            data = tif.asarray(series=i)
            assert isinstance(data, numpy.ndarray)
            assert data.flags['C_CONTIGUOUS']
            assert data.shape == shape
            assert data.dtype == numpy.uint8
            assert_aszarr_method(tif, data, series=i)
        assert__str__(tif)

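# Not part of the test suite: a sketch of iterating multi-series files
# such as the generic and FreeImage files above.
def example_iterate_series():
    """Sketch: enumerate all series and read each one."""
    with TiffFile('freeimage.tif') as tif:  # placeholder multi-series file
        for series in tif.series:
            data = series.asarray()
            print(series.index, series.kind, series.axes, data.shape)
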
@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_leadtools():
    """Test read LeadTools 11-pages with different compression."""
    # https://www.leadtools.com/support/forum/posts/t10960-
    fname = private_file('LeadTools/MultipleFormats.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 11
        assert len(tif.series) == 11
        assert__str__(tif)
        # 1- Uncompressed bilevel
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        data = page.asarray()
        assert data.shape == (75, 600)
        assert data.dtype == numpy.bool_
        # 2- Uncompressed CMYK
        page = tif.pages[1]
        assert page.photometric == PHOTOMETRIC.SEPARATED
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        data = page.asarray()
        assert data.shape == (75, 600, 4)
        assert data.dtype == numpy.uint8
        # 3- JPEG 2000 compression
        page = tif.pages[2]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.JPEG2000
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        data = page.asarray()
        assert data.shape == (75, 600, 3)
        assert data.dtype == numpy.uint8
        # 4- JPEG 4:1:1 compression
        page = tif.pages[3]
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.compression == COMPRESSION.JPEG
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['YCbCrSubSampling'].value == (2, 2)
        data = page.asarray()
        assert data.shape == (75, 600, 3)
        assert data.dtype == numpy.uint8
        # 5- JBIG compression
        page = tif.pages[4]
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.JBIG
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        with pytest.raises(ValueError):
            data = page.asarray()
        # 6- RLE Packbits compression
        page = tif.pages[5]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        data = page.asarray()
        assert data.shape == (75, 600, 3)
        assert data.dtype == numpy.uint8
        # 7- CMYK with RLE Packbits compression
        page = tif.pages[6]
        assert page.photometric == PHOTOMETRIC.SEPARATED
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        data = page.asarray()
        assert data.shape == (75, 600, 4)
        assert data.dtype == numpy.uint8
        # 8- YCC with RLE Packbits compression
        page = tif.pages[7]
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.compression == COMPRESSION.PACKBITS
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['YCbCrSubSampling'].value == (2, 1)
        with pytest.raises(NotImplementedError):
            data = page.asarray()
        # 9- LZW compression
        page = tif.pages[8]
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        data = page.asarray()
        assert data.shape == (75, 600)
        assert data.dtype == numpy.bool_
        # 10- CCITT Group 4 compression
        page = tif.pages[9]
        assert page.photometric == PHOTOMETRIC.MINISWHITE
        assert page.compression == COMPRESSION.CCITT_T6
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        with pytest.raises(ValueError):
            data = page.asarray()
        # 11- CCITT Group 3 2-D compression
        page = tif.pages[10]
        assert page.photometric == PHOTOMETRIC.MINISWHITE
        assert page.compression == COMPRESSION.CCITT_T4
        assert page.imagewidth == 600
        assert page.imagelength == 75
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        with pytest.raises(ValueError):
            data = page.asarray()


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_12bit():
    """Test read 12 bit images."""
    fname = private_file('12bit.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1000
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 1024
        assert page.imagelength == 304
        assert page.bitspersample == 12
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (1000, 304, 1024)
        assert series.dtype == numpy.uint16
        assert series.axes == 'IYX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray(478)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (304, 1024)
        assert data.dtype == numpy.uint16
        assert round(abs(data[138, 475] - 40), 7) == 0
        assert_aszarr_method(tif, data, key=478)
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_lzw_12bit_table():
    """Test read lzw-full-12-bit-table.tif.

    Also test RowsPerStrip > ImageLength.

    """
    fname = public_file('twelvemonkeys/tiff/lzw-full-12-bit-table.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 874
        assert page.imagelength == 1240
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        assert page.rowsperstrip == 1240
        assert page.tags['RowsPerStrip'].value == 4294967295
        # assert data
        image = page.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image[434, 588] == 88
        assert image[400, 600] == 255
        assert_aszarr_method(page, image)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_lzw_large_buffer():
    """Test read LZW compression which requires large buffer."""
    # https://github.com/groupdocs-viewer/GroupDocs.Viewer-for-.NET-MVC-App
    # /issues/35
    fname = private_file('lzw/lzw_large_buffer.tiff')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 5104
        assert page.imagelength == 8400
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        # assert data
        image = page.asarray()
        assert image.shape == (8400, 5104, 4)
        assert image.dtype == numpy.uint8
        image = tif.asarray()
        assert image.shape == (8400, 5104, 4)
        assert image.dtype == numpy.uint8
        assert image[4200, 2550, 0] == 0
        assert image[4200, 2550, 3] == 255
        assert_aszarr_method(tif, image)
        assert__str__(tif)

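# Not part of the test suite: a sketch of reading selected pages without
# decoding a whole multi-page file, as test_read_12bit does with key=478.
def example_read_selected_pages():
    """Sketch: decode single pages or a page range via the key parameter."""
    import tifffile

    one = tifffile.imread('12bit.tif', key=478)  # placeholder file name
    some = tifffile.imread('12bit.tif', key=range(400, 500))
    return one, some
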
@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_lzw_ycbcr_subsampling():
    """Test fail LZW compression with subsampling."""
    fname = private_file('lzw/lzw_ycbcr_subsampling.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.LZW
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 39
        assert page.imagelength == 39
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        with pytest.raises(NotImplementedError):
            page.asarray()
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ycbcr_subsampling():
    """Test fail YCBCR with subsampling."""
    fname = private_file('ycbcr_subsampling.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 2
        page = tif.pages.first
        assert page.compression == COMPRESSION.NONE
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 640
        assert page.imagelength == 480
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        with pytest.raises(NotImplementedError):
            page.asarray()
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_jpeg_baboon():
    """Test JPEG compression."""
    fname = private_file('baboon.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert 'JPEGTables' in page.tags
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.JPEG
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        # with pytest.raises((ValueError, NotImplementedError)):
        image = tif.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert_aszarr_method(tif, image)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_jpeg_ycbcr():
    """Test read YCBCR JPEG is returned as RGB."""
    fname = private_file('jpeg/jpeg_ycbcr.tiff')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 128
        assert page.imagelength == 80
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (80, 128, 3)
        assert image.dtype == numpy.uint8
        assert tuple(image[50, 50, :]) == (177, 149, 210)
        # YCBCR (164, 154, 137)
        assert_aszarr_method(tif, image)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
@pytest.mark.parametrize(
    'fname', ['tiff_tiled_cmyk_jpeg.tif', 'tiff_strip_cmyk_jpeg.tif']
)
def test_read_jpeg_cmyk(fname):
    """Test read JPEG compressed CMYK image."""
    with TiffFile(private_file(f'pillow/{fname}')) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.SEPARATED
        assert page.shape == (100, 100, 4)
        assert page.dtype == numpy.uint8
        data = page.asarray()
        assert data.shape == (100, 100, 4)
        assert data.dtype == numpy.uint8
        assert tuple(data[46, 49]) == (79, 230, 222, 77)
        assert_aszarr_method(tif, data)
        # assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG8.available,
    reason=REASON,
)
def test_read_jpeg12_mandril():
    """Test read JPEG 12-bit compression."""
    # JPEG 12-bit
    fname = private_file('jpeg/jpeg12_mandril.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.imagewidth == 512
        assert page.imagelength == 480
        assert page.bitspersample == 12
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (480, 512, 3)
        assert image.dtype == numpy.uint16
        assert tuple(image[128, 128, :]) == (1685, 1859, 1376)
        # YCBCR (1752, 1836, 2000)
        assert_aszarr_method(tif, image)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE
    or SKIP_CODECS
    or SKIP_LARGE
    or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_jpeg_lsb2msb():
    """Test read huge tiled, JPEG compressed, with lsb2msb specified.

    Also test JPEG with RGB photometric.

    """
    fname = private_file('large/jpeg_lsb2msb.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.RGB
        assert not page.is_jfif
        assert page.imagewidth == 49128
        assert page.imagelength == 59683
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (59683, 49128, 3)
        assert image.dtype == numpy.uint8
        assert tuple(image[38520, 43767, :]) == (255, 255, 255)
        assert tuple(image[47866, 30076, :]) == (52, 39, 23)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE
    or SKIP_CODECS
    or not imagecodecs.JPEG.available
    or not imagecodecs.JPEG2K.available,
    reason=REASON,
)
def test_read_aperio_j2k():
    """Test read SVS slide with J2K compression."""
    fname = private_file('slides/CMU-1-JP2K-33005.tif')
    with TiffFile(fname) as tif:
        assert tif.is_svs
        assert len(tif.pages) == 6
        page = tif.pages.first
        assert page.compression == COMPRESSION.APERIO_JP2000_RGB
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.shape == (32893, 46000, 3)
        assert page.dtype == numpy.uint8
        page = tif.pages[1]
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.shape == (732, 1024, 3)
        assert page.dtype == numpy.uint8
        page = tif.pages[2]
        assert page.compression == COMPRESSION.APERIO_JP2000_RGB
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.shape == (8223, 11500, 3)
        assert page.dtype == numpy.uint8
        page = tif.pages[3]
        assert page.compression == COMPRESSION.APERIO_JP2000_RGB
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.shape == (2055, 2875, 3)
        assert page.dtype == numpy.uint8
        page = tif.pages[4]
        assert page.is_reduced
        assert page.compression == COMPRESSION.LZW
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.shape == (463, 387, 3)
        assert page.dtype == numpy.uint8
        page = tif.pages[5]
        assert page.is_reduced
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.shape == (431, 1280, 3)
        assert page.dtype == numpy.uint8
        # assert data
        image = tif.pages[3].asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (2055, 2875, 3)
        assert image.dtype == numpy.uint8
        assert image[512, 1024, 0] == 246
        assert image[512, 1024, 1] == 245
        assert image[512, 1024, 2] == 245
        assert_decode_method(tif.pages[3], image)
        assert__str__(tif)

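# Not part of the test suite: a sketch of decoding a single tile by hand
# with TiffPage.decode, the method exercised by assert_decode_method.
# The exact structure of the return value is not asserted here.
def example_decode_first_segment():
    """Sketch: read and decode the first segment of a tiled page."""
    with TiffFile('CMU-1-JP2K-33005.tif') as tif:  # placeholder SVS file
        page = tif.pages[3]
        fh = tif.filehandle
        fh.seek(page.dataoffsets[0])
        data = fh.read(page.databytecounts[0])
        # decode returns the decoded segment with its indices and shape
        return page.decode(data, 0)
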
@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_lzma():
    """Test read LZMA compression."""
    # 512x512, uint8, lzma compression
    fname = private_file('lzma.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.compression == COMPRESSION.LZMA
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'shaped'
        # assert data
        data = tif.asarray()
        assert data.flags['C_CONTIGUOUS']
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 512)
        assert data.dtype == numpy.uint8
        assert data[273, 426] == 151
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PUBLIC or SKIP_CODECS or not imagecodecs.WEBP.available, reason=REASON
)
def test_read_webp():
    """Test read WebP compression."""
    fname = public_file('GDAL/tif_webp.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.WEBP
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 50
        assert page.imagelength == 50
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (50, 50, 3)
        assert image.dtype == numpy.uint8
        assert image[25, 25, 0] == 92
        assert image[25, 25, 1] == 122
        assert image[25, 25, 2] == 37
        assert_aszarr_method(tif, image)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PUBLIC or SKIP_CODECS or not imagecodecs.LERC.available, reason=REASON
)
def test_read_lerc():
    """Test read LERC compression."""
    if not hasattr(imagecodecs, 'LERC'):
        pytest.skip('LERC codec missing')
    fname = public_file('imagecodecs/rgb.u2.lerc.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.LERC
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 31
        assert page.imagelength == 32
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (32, 31, 3)
        assert image.dtype == numpy.uint16
        assert tuple(image[25, 25]) == (3265, 1558, 2811)
        assert_aszarr_method(tif, image)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PUBLIC or SKIP_CODECS or not imagecodecs.ZSTD.available, reason=REASON
)
def test_read_zstd():
    """Test read ZStd compression."""
    fname = public_file('GDAL/byte_zstd.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.ZSTD
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert data
        image = tif.asarray()  # fails with imagecodecs <= 2018.11.8
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (20, 20)
        assert image.dtype == numpy.uint8
        assert image[18, 1] == 247
        assert_aszarr_method(tif, image)
        assert__str__(tif)

@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_jetraw():
    """Test read Jetraw compression."""
    try:
        have_jetraw = imagecodecs.JETRAW.available
    except AttributeError:
        # requires imagecodecs > 2022.2.22
        have_jetraw = False
    fname = private_file('jetraw/16ms-1.p.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.JETRAW
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 2304
        assert page.imagelength == 2304
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        assert__str__(tif)
        # assert data
        if not have_jetraw:
            pytest.skip('Jetraw codec not available')
        image = tif.asarray()
        assert image[1490, 1830] == 36554
        assert_aszarr_method(tif, image)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_pixtiff():
    """Test read PIXTIFF compression."""
    # https://github.com/haraldk/TwelveMonkeys/issues/307
    fname = private_file('PIXTIFF/pixtiff_1bpp.tif')
    with TiffFile(fname) as tif:
        page = tif.pages.first
        assert page.compression == COMPRESSION.PIXTIFF
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 801
        assert page.imagelength == 1313
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        assert__str__(tif)
        image = tif.asarray()
        assert image.dtype == numpy.bool_
        assert image.shape == (1313, 801)
        assert not image[1000, 700]
    fname = private_file('PIXTIFF/pixtiff_4bpp.tif')
    with TiffFile(fname) as tif:
        page = tif.pages.first
        assert page.compression == COMPRESSION.PIXTIFF
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 801
        assert page.imagelength == 1313
        assert page.bitspersample == 4
        assert page.samplesperpixel == 1
        assert__str__(tif)
        image = tif.asarray()
        assert image.dtype == numpy.uint8
        assert image.shape == (1313, 801)
        assert image[1000, 700] == 6
    fname = private_file('PIXTIFF/pixtiff_8bpp_rgb.tif')
    with TiffFile(fname) as tif:
        page = tif.pages.first
        assert page.compression == COMPRESSION.PIXTIFF
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 801
        assert page.imagelength == 1313
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert__str__(tif)
        image = tif.asarray()
        assert image.dtype == numpy.uint8
        assert image.shape == (1313, 801, 3)
        assert tuple(image[1000, 700]) == (89, 70, 80)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.LJPEG.available,
    reason=REASON,
)
def test_read_dng():
    """Test read JPEG compressed CFA image in SubIFD."""
    fname = private_file('DNG/IMG_0793.DNG')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        assert len(tif.series) == 2
        page = tif.pages.first
        assert page.index == 0
        assert page.shape == (640, 852, 3)
        assert page.bitspersample == 8
        data = page.asarray()
        assert_aszarr_method(tif, data)
        page = tif.pages.first.pages[0]
        assert page.is_tiled
        assert page.treeindex == (0, 0)
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.shape == (3024, 4032)
        assert page.bitspersample == 16
        assert page.tags['CFARepeatPatternDim'].value == (2, 2)
        assert page.tags['CFAPattern'].value == b'\x00\x01\x01\x02'
        data = page.asarray()
        assert_aszarr_method(tif.series[1], data)
        assert__str__(tif)

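# Not part of the test suite: a sketch of reading the raw CFA image that
# DNG files store in a SubIFD, as test_read_dng above does via pages.pages.
def example_read_dng_subifd():
    """Sketch: access the SubIFD chain of the first page."""
    with TiffFile('IMG_0793.DNG') as tif:  # placeholder DNG file
        preview = tif.pages.first.asarray()  # main IFD holds the preview
        raw = tif.pages.first.pages[0].asarray()  # SubIFD holds the mosaic
    return preview, raw
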
@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.LJPEG.available,
    reason=REASON,
)
def test_read_cfa():
    """Test read 14-bit uncompressed and JPEG compressed CFA image."""
    fname = private_file('DNG/cinemadng/M14-1451_000085_cDNG_uncompressed.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == 1
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.imagewidth == 960
        assert page.imagelength == 540
        assert page.bitspersample == 14
        assert page.tags['CFARepeatPatternDim'].value == (2, 2)
        assert page.tags['CFAPattern'].value == b'\x00\x01\x01\x02'
        data = page.asarray()
        assert_aszarr_method(tif, data)
    fname = private_file('DNG/cinemadng/M14-1451_000085_cDNG_compressed.dng')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages.first
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.CFA
        assert page.imagewidth == 960
        assert page.imagelength == 540
        assert page.bitspersample == 14
        assert page.tags['CFARepeatPatternDim'].value == (2, 2)
        assert page.tags['CFAPattern'].value == b'\x00\x01\x01\x02'
        image = page.asarray()
        assert_array_equal(image, data)
        assert_aszarr_method(tif, data)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_lena_be_f16_contig():
    """Test read big endian float16 horizontal differencing."""
    fname = private_file('PS/lena_be_f16_contig.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype == numpy.float16
        assert series.axes == 'YXS'
        assert series.kind == 'imagej'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (512, 512, 3)
        assert data.dtype == numpy.float16
        assert_array_almost_equal(
            data[256, 256], (0.4563, 0.052856, 0.064819)
        )
        assert_aszarr_method(tif, data, series=0)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_lena_be_f16_lzw_planar():
    """Test read big endian, float16, LZW, horizontal differencing."""
    fname = private_file('PS/lena_be_f16_lzw_planar.tif')
    with TiffFile(fname, is_imagej=False) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert not tif.is_imagej
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 512, 512)
        assert series.dtype == numpy.float16
        assert series.axes == 'SYX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 512, 512)
        assert data.dtype == numpy.float16
        assert_array_almost_equal(
            data[:, 256, 256], (0.4563, 0.052856, 0.064819)
        )
        assert_aszarr_method(tif, data, series=0)
        assert_decode_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_lena_be_f32_deflate_contig():
    """Test read big endian, float32 horizontal differencing, deflate."""
    fname = private_file('PS/lena_be_f32_deflate_contig.tif')
    with TiffFile(fname, is_imagej=False) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert not tif.is_imagej
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 32
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype == numpy.float32
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (512, 512, 3)
        assert data.dtype == numpy.float32
        assert_array_almost_equal(
            data[256, 256], (0.456386, 0.052867, 0.064795)
        )
        assert_aszarr_method(tif, data, series=0)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_lena_le_f32_lzw_planar():
    """Test read little endian, LZW, float32 horizontal differencing."""
    fname = private_file('PS/lena_le_f32_lzw_planar.tif')
    with TiffFile(fname, is_imagej=False) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert not tif.is_imagej
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 32
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 512, 512)
        assert series.dtype == numpy.float32
        assert series.axes == 'SYX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 512, 512)
        assert data.dtype == numpy.float32
        assert_array_almost_equal(
            data[:, 256, 256], (0.456386, 0.052867, 0.064795)
        )
        assert_aszarr_method(tif, data, series=0)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_lena_be_rgb48():
    """Test read RGB48."""
    fname = private_file('PS/lena_be_rgb48.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YXS'
        assert series.kind == 'imagej'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (512, 512, 3)
        assert data.dtype == numpy.uint16
        assert_array_equal(data[256, 256], (46259, 16706, 18504))
        assert_aszarr_method(tif, data, series=0)
        assert__str__(tif)

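# Not part of the test suite: a sketch of the out='memmap' pattern used by
# the large-file tests below to avoid allocating gigabytes of RAM.
def example_memory_map():
    """Sketch: memory-map image data instead of loading it."""
    import tifffile

    # map uncompressed, contiguous image data in place:
    data = tifffile.memmap('contiguous.tif', mode='r')  # placeholder name
    # otherwise decode into a memory-mapped temporary file:
    data = tifffile.imread('compressed.tif', out='memmap')
    return data
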
@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE or IS_PYPY, reason=REASON)
def test_read_huge_ps5_memmap():
    """Test read 30000x30000 float32 contiguous."""
    # TODO: segfault on pypy3.7-v7.3.5rc2-win64
    fname = private_file('large/huge_ps5.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.dataoffsets[0] == 21890
        assert page.nbytes == 3600000000
        assert not page.is_memmappable  # data not aligned!
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 30000
        assert page.imagelength == 30000
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (30000, 30000)
        assert series.dtype == numpy.float32
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray(out='memmap')  # memmap in a temp file
        assert isinstance(data, numpy.memmap)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (30000, 30000)
        assert data.dtype == numpy.float32
        assert data[6597, 8135] == 0.008780896663665771
        assert_aszarr_method(tif, data)
        del data
        assert not tif.filehandle.closed
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_movie():
    """Test read 30000 pages, uint16."""
    fname = public_file('tifffile/movie.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 30000
        assert len(tif.series) == 1
        assert tif.is_uniform
        # assert series properties
        series = tif.series[0]
        assert series.shape == (30000, 64, 64)
        assert series.dtype == numpy.uint16
        assert series.axes == 'IYX'
        assert series.kind == 'uniform'
        # assert page properties
        page = tif.pages[-1]
        if tif.pages.cache:
            assert isinstance(page, TiffFrame)
        else:
            assert isinstance(page, TiffPage)
        assert page.shape == (64, 64)
        page = tif.pages[-3]
        if tif.pages.cache:
            assert isinstance(page, TiffFrame)
        else:
            assert isinstance(page, TiffPage)
        # assert data
        data = tif.pages[29999].asarray()  # last frame
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (64, 64)
        assert data.dtype == numpy.uint16
        assert data[32, 32] == 460
        del data
        # read selected pages
        # https://github.com/blink1073/tifffile/issues/51
        data = tif.asarray(key=[31, 999, 29999])
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 64, 64)
        assert data[2, 32, 32] == 460
        del data
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_movie_memmap():
    """Test read 30000 pages memory-mapped."""
    fname = public_file('tifffile/movie.tif')
    with TiffFile(fname) as tif:
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (30000, 64, 64)
        assert data.dtype == numpy.dtype('<u2')
        # remainder of this test was lost in the source archive


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_100000_pages_movie():
    # decorator, function name, and opening lines reconstructed; the
    # original text up to the byteorder assertion was lost in the archive
    """Test read 100000 pages ImageJ file."""
    fname = public_file('tifffile/100000_pages.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 100000
        assert len(tif.series) == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (100000, 64, 64)
        assert series.dtype == numpy.uint16
        assert series.axes == 'TYX'
        assert series.kind == 'imagej'
        # assert page properties
        frame = tif.pages[100]
        assert isinstance(frame, TiffFrame)  # uniform=True
        assert frame.shape == (64, 64)
        frame = tif.pages.first
        assert frame.imagewidth == 64
        assert frame.imagelength == 64
        assert frame.bitspersample == 16
        assert frame.compression == 1
        assert frame.shape == (64, 64)
        assert frame.shaped == (1, 1, 64, 64, 1)
        assert frame.ndim == 2
        assert frame.size == 4096
        assert frame.nbytes == 8192
        assert frame.axes == 'YX'
        assert frame._nextifd() == 819200206
        assert frame.is_final
        assert frame.is_contiguous
        assert frame.is_memmappable
        assert frame.hash
        assert frame.decode
        assert frame.aszarr()
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.48g'
        assert round(abs(ijmeta['max'] - 119.0), 7) == 0
        assert round(abs(ijmeta['min'] - 86.0), 7) == 0
        # assert data
        data = tif.asarray()
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (100000, 64, 64)
        assert data.dtype == numpy.uint16
        assert round(abs(data[7310, 25, 25] - 100), 7) == 0
        # too slow: assert_aszarr_method(tif, data)
        del data
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_chart_bl():
    """Test read 13228x18710, 1 bit, no bitspersample tag."""
    fname = public_file('tifffile/chart_bl.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 13228
        assert page.imagelength == 18710
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        assert page.rowsperstrip == 18710
        # assert series properties
        series = tif.series[0]
        assert series.shape == (18710, 13228)
        assert series.dtype == numpy.bool_
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (18710, 13228)
        assert data.dtype == numpy.bool_
        assert data[0, 0] is numpy.bool_(True)
        assert data[5000, 5000] is numpy.bool_(False)
        if not SKIP_LARGE:
            assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_srtm_20_13():
    """Test read 6000x6000 int16 GDAL."""
    fname = private_file('large/srtm_20_13.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 6000
        assert page.imagelength == 6000
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        assert page.nodata == -32768
        assert page.tags['GDAL_NODATA'].value == '-32768'
        assert page.tags['GeoAsciiParamsTag'].value == 'WGS 84|'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (6000, 6000)
        assert series.dtype == numpy.int16
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (6000, 6000)
        assert data.dtype == numpy.int16
        assert data[5199, 5107] == 1019
        assert data[0, 0] == -32768
        assert_aszarr_method(tif, data)
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_gel_scan():
    """Test read 6976x4992x3 uint8 LZW."""
    fname = private_file('large/gel_1-scan2.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 4992
        assert page.imagelength == 6976
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (6976, 4992, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (6976, 4992, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[2229, 1080, :]) == (164, 164, 164)
        assert_aszarr_method(tif, data)
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_caspian():
    """Test read 3x220x279 float64, RGB, deflate, GDAL."""
    fname = public_file('juicypixels/caspian.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.DEFLATE
        assert page.imagewidth == 279
        assert page.imagelength == 220
        assert page.bitspersample == 64
        assert page.samplesperpixel == 3
        assert page.tags['GDAL_METADATA'].value.startswith('<GDALMetadata>')
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 220, 279)
        assert series.dtype == numpy.float64
        assert series.axes == 'SYX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (3, 220, 279)
        assert data.dtype == numpy.float64
        assert round(abs(data[2, 100, 140] - 353.0), 7) == 0
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_subifds_array():
    """Test read SubIFDs."""
    fname = public_file('Tiff-Library-4J/IFD struct/SubIFDs array E.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        # make sure no pyramid was detected
        assert len(tif.series) == 5
        assert tif.series[0].shape == (1500, 2000, 3)
        assert tif.series[1].shape == (1200, 1600, 3)
        assert tif.series[2].shape == (900, 1200, 3)
        assert tif.series[3].shape == (600, 800, 3)
        assert tif.series[4].shape == (300, 400, 3)
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 2000
        assert page.imagelength == 1500
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['SubIFDs'].value == (
            14760220,
            18614796,
            19800716,
            18974964,
        )
        # assert subifds
        assert len(page.pages) == 4
        page = tif.pages.first.pages[0]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 1600
        assert page.imagelength == 1200
        assert_aszarr_method(page)
        page = tif.pages.first.pages[1]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 1200
        assert page.imagelength == 900
        assert_aszarr_method(page)
        page = tif.pages.first.pages[2]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 800
        assert page.imagelength == 600
        assert_aszarr_method(page)
        page = tif.pages.first.pages[3]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 400
        assert page.imagelength == 300
        assert_aszarr_method(page)
        # assert data
        image = page.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (300, 400, 3)
        assert image.dtype == numpy.uint8
        assert tuple(image[124, 292]) == (236, 109, 95)
        assert__str__(tif)

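
# A minimal sketch of the SubIFD traversal pattern exercised by the SubIFD
# tests above and below: reduced-resolution SubIFD pages of a TiffPage are
# exposed as TiffPage.pages. The file name is a hypothetical placeholder,
# and the empty-default fallback is an assumption for pages without SubIFDs.
def _example_iter_subifds(fname='subifds.tif'):
    with TiffFile(fname) as tif:
        page = tif.pages.first
        shapes = [page.shape]
        for subifd in page.pages or ():  # assume falsy when no SubIFDs
            shapes.append(subifd.shape)
    return shapes
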

@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_subifd4():
    """Test read BigTIFFSubIFD4."""
    fname = public_file('twelvemonkeys/bigtiff/BigTIFFSubIFD4.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 2
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['SubIFDs'].value == (3088,)
        # assert subifd
        page = page.pages[0]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 32
        assert page.imagelength == 32
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = page.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (32, 32, 3)
        assert image.dtype == numpy.uint8
        assert image[15, 15, 0] == 255
        assert image[16, 16, 2] == 0
        assert_aszarr_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_subifd8():
    """Test read BigTIFFSubIFD8."""
    fname = public_file('twelvemonkeys/bigtiff/BigTIFFSubIFD8.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 2
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['SubIFDs'].value == (3088,)
        # assert subifd
        page = page.pages[0]
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 32
        assert page.imagelength == 32
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = page.asarray()
        assert image.flags['C_CONTIGUOUS']
        assert image.shape == (32, 32, 3)
        assert image.dtype == numpy.uint8
        assert image[15, 15, 0] == 255
        assert image[16, 16, 2] == 0
        assert_aszarr_method(page)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON
)
def test_read_tiles():
    """Test iteration over tiles, manually and via page.segments."""
    data = numpy.arange(600 * 500 * 3, dtype=numpy.uint8).reshape(
        (600, 500, 3)
    )
    with TempFileName('read_tiles') as fname:
        with TiffWriter(fname) as tif:
            options = dict(
                tile=(256, 256),
                photometric=PHOTOMETRIC.RGB,
                compression=COMPRESSION.JPEG,
                metadata=None,
            )
            tif.write(data, **options)
            tif.write(
                data[::2, ::2], subfiletype=FILETYPE.REDUCEDIMAGE, **options
            )
        with TiffFile(fname) as tif:
            fh = tif.filehandle
            for page in tif.pages:
                segments = page.segments()
                jpegtables = page.tags.get('JPEGTables', None)
                if jpegtables is not None:
                    jpegtables = jpegtables.value
                for index, (offset, bytecount) in enumerate(
                    zip(page.dataoffsets, page.databytecounts)
                ):
                    fh.seek(offset)
                    data = fh.read(bytecount)
                    tile, indices, shape = page.decode(
                        data, index, jpegtables=jpegtables
                    )
                    assert_array_equal(tile, next(segments)[0])


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_lsm_mosaic():
    """Test read LSM: PTZCYX (Mosaic mode), two areas, 32 samples, >4 GB."""
    # LSM files are little endian with two series, one of which is reduced RGB
    # Tags may be unordered or contain bogus values
    fname = private_file(
        'lsm/Twoareas_Zstacks54slices_3umintervals_5cycles.lsm'
    )
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1080
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 32
        # assert strip offsets are corrected
        page = tif.pages[-2]
        assert page.dataoffsets[0] == 9070895981
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 5, 54, 32, 512, 512)
        assert series.dtype == numpy.uint16
        assert series.axes == 'PTZCYX'
        assert series.kind == 'lsm'
        if 1:
            series = tif.series[1]
            assert series.shape == (2, 5, 54, 3, 128, 128)
            assert series.dtype == numpy.uint8
            assert series.axes == 'PTZSYX'
            assert series.kind == 'lsm'
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 512
        assert tags['DimensionY'] == 512
        assert tags['DimensionZ'] == 54
        assert tags['DimensionTime'] == 5
        assert tags['DimensionChannels'] == 32
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Stack'
        assert tags['User'] == 'lfdguest1'
        # very slow: assert_aszarr_method(tif)
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_lsm_carpet():
    """Test read LSM: TYX (time series x-y), 72000 pages."""
    # reads very slowly, ensure colormap is not applied
    fname = private_file('lsm/Cardarelli_carpet_3.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 72000
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert 'ColorMap' in page.tags
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 32
        assert page.imagelength == 10
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.dtype == numpy.uint8
        assert series.shape == (36000, 10, 32)
        assert series.axes == 'TYX'
        assert series.kind == 'lsm'
        assert series.get_shape(False) == (36000, 1, 10, 32)
        assert series.get_axes(False) == 'TCYX'
        if 1:
            series = tif.series[1]
            assert series.dtype == numpy.uint8
            assert series.shape == (36000, 3, 40, 128)
            assert series.axes == 'TSYX'
            assert series.get_shape(False) == (36000, 3, 40, 128)
            assert series.get_axes(False) == 'TSYX'
            assert series.kind == 'lsm'
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 32
        assert tags['DimensionY'] == 10
        assert tags['DimensionZ'] == 1
        assert tags['DimensionTime'] == 36000
        assert tags['DimensionChannels'] == 1
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Plane'
        assert tags['User'] == 'LSM User'
        # assert_aszarr_method(tif)
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_lsm_sfcs():
    """Test read LSM linescan: TX (time series y), 1 page."""
    # second page/series is corrupted: ImageLength=128 but StripByteCounts=1
    fname = private_file('lsm/sFCS_780_2069Hz_3p94us.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 2
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert 'ColorMap' in page.tags
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 52
        assert page.imagelength == 100000
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.dtype == numpy.uint16
        assert series.shape == (100000, 52)
        assert series.axes == 'TX'
        assert series.kind == 'lsm'
        assert series.get_shape(False) == (1, 1, 1, 100000, 52)
        assert series.get_axes(False) == 'MPCTX'
        data = series.asarray()
        assert data.shape == (100000, 52)
        assert data.dtype == numpy.uint16
        assert data[1000].sum() == 4
        if 1:
            series = tif.series[1]
            assert series.dtype == numpy.uint8
            assert series.shape == (3, 128, 1)
            assert series.axes == 'SYX'
            assert series.get_shape(False) == (3, 128, 1)
            assert series.get_axes(False) == 'SYX'
            assert series.kind == 'lsm'
            with pytest.raises(TiffFileError):
                # strip cannot be reshaped from (1,) to (1, 128, 1, 1)
                series.asarray()
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 52
        assert tags['DimensionY'] == 1
        assert tags['DimensionZ'] == 1
        assert tags['DimensionTime'] == 100000
        assert tags['DimensionChannels'] == 1
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Line'
        assert tags['User'] == 'LSM User'
        # assert_aszarr_method(tif)
        assert__str__(tif, 0)

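
# A minimal sketch of the lsm_metadata access pattern asserted throughout the
# LSM tests here: acquisition settings are returned as a plain dict, with scan
# settings nested under 'ScanInformation'. The file name is hypothetical.
def _example_lsm_dimensions(fname='scan.lsm'):
    with TiffFile(fname) as tif:
        info = tif.lsm_metadata
        scan = info['ScanInformation']
    return info['DimensionX'], info['DimensionTime'], scan['ScanMode']
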

@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_lsm_line_2channel():
    """Test read LSM point scan with 2 channels: CTX, 1 page."""
    # https://github.com/cgohlke/tifffile/issues/269
    # second page/series is corrupted: ImageLength=128 but StripByteCounts=1
    fname = private_file('lsm/issue269.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 2
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert 'ColorMap' not in page.tags
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 70000
        assert page.bitspersample == 8
        assert page.samplesperpixel == 2
        # assert series properties
        series = tif.series[0]
        assert series.dtype == numpy.uint8
        assert series.shape == (2, 70000, 512)
        assert series.axes == 'CTX'
        assert series.kind == 'lsm'
        assert series.get_axes(False) == 'MPCTX'
        assert series.get_shape(False) == (1, 1, 2, 70000, 512)
        data = series.asarray()
        assert data.shape == (2, 70000, 512)
        assert data.dtype == numpy.uint8
        assert data[0, 1000].sum() == 13241
        if 1:
            series = tif.series[1]
            assert series.dtype == numpy.uint8
            assert series.shape == (3, 128, 1)
            assert series.axes == 'SYX'
            assert series.get_shape(False) == (3, 128, 1)
            assert series.get_axes(False) == 'SYX'
            assert series.kind == 'lsm'
            with pytest.raises(TiffFileError):
                # strip cannot be reshaped from (1,) to (1, 128, 1, 1)
                series.asarray()
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 512
        assert tags['DimensionY'] == 1
        assert tags['DimensionZ'] == 1
        assert tags['DimensionTime'] == 70000
        assert tags['DimensionChannels'] == 2
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Line'
        assert tags['User'] == 'LSMUser'
        assert_aszarr_method(tif)
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_lsm_take1():
    """Test read LSM: TZCYX (Plane mode), single image, uint8."""
    fname = private_file('lsm/take1.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 2
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        page = tif.pages[1]
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.samplesperpixel == 3
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.dtype == numpy.uint8
        assert series.shape == (512, 512)
        assert series.axes == 'YX'
        assert series.kind == 'lsm'
        assert series.get_shape(False) == (1, 1, 512, 512)
        assert series.get_axes(False) == 'ZCYX'
        if 1:
            series = tif.series[1]
            assert series.shape == (3, 128, 128)
            assert series.dtype == numpy.uint8
            assert series.axes == 'SYX'
            assert series.kind == 'lsm'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (512, 512)
        assert data.dtype == numpy.uint8
        assert data[256, 256] == 101
        if 1:
            data = tif.asarray(series=1)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == (3, 128, 128)
            assert data.dtype == numpy.uint8
            assert tuple(data[..., 64, 64]) == (89, 89, 89)
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 512
        assert tags['DimensionY'] == 512
        assert tags['DimensionZ'] == 1
        assert tags['DimensionTime'] == 1
        assert tags['DimensionChannels'] == 1
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Plane'
        assert tags['User'] == 'LSM User'
        assert len(tags['Tracks']) == 1
        assert len(tags['Tracks'][0]['DataChannels']) == 1
        track = tags['Tracks'][0]
        assert track['DataChannels'][0]['Name'] == 'Ch1'
        assert track['DataChannels'][0]['BitsPerSample'] == 8
        assert len(track['IlluminationChannels']) == 1
        assert track['IlluminationChannels'][0]['Name'] == '561'
        assert track['IlluminationChannels'][0]['Wavelength'] == 561.0
        assert_aszarr_method(tif)
        assert_aszarr_method(tif, series=1)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_lsm_2chzt():
    """Test read LSM: ZCYX (Stack mode) uint8."""
    fname = public_file('scif.io/2chZT.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 798
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.tags['StripOffsets'].value[2] == 242632  # bogus offset
        assert page.tags['StripByteCounts'].value[2] == 0  # no strip data
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 400
        assert page.imagelength == 300
        assert page.bitspersample == 8
        assert page.samplesperpixel == 2
        page = tif.pages[1]
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 96
        assert page.samplesperpixel == 3
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (19, 21, 2, 300, 400)
        assert series.dtype == numpy.uint8
        assert series.axes == 'TZCYX'
        assert series.kind == 'lsm'
        if 1:
            series = tif.series[1]
            assert series.shape == (19, 21, 3, 96, 128)
            assert series.dtype == numpy.uint8
            assert series.axes == 'TZSYX'
            assert series.kind == 'lsm'
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (19, 21, 2, 300, 400)
        assert data.dtype == numpy.uint8
        assert data[18, 20, 1, 199, 299] == 39
        if 1:
            data = tif.asarray(series=1)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == (19, 21, 3, 96, 128)
            assert data.dtype == numpy.uint8
            assert tuple(data[18, 20, :, 64, 96]) == (22, 22, 0)
        del data
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 400
        assert tags['DimensionY'] == 300
        assert tags['DimensionZ'] == 21
        assert tags['DimensionTime'] == 19
        assert tags['DimensionChannels'] == 2
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Stack'
        assert tags['User'] == 'zjfhe'
        assert len(tags['Tracks']) == 3
        assert len(tags['Tracks'][0]['DataChannels']) == 1
        track = tags['Tracks'][0]
        assert track['DataChannels'][0]['Name'] == 'Ch3'
        assert track['DataChannels'][0]['BitsPerSample'] == 8
        assert len(track['IlluminationChannels']) == 6
        assert track['IlluminationChannels'][5]['Name'] == '488'
        assert track['IlluminationChannels'][5]['Wavelength'] == 488.0
        assert_aszarr_method(tif)
        assert_aszarr_method(tif, series=1)
        assert__str__(tif, 0)

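
# A minimal sketch of the out='memmap' pattern used by the large-file tests:
# asarray returns a numpy.memmap backed by a temporary file instead of an
# in-memory ndarray, so multi-gigabyte stacks do not exhaust RAM. The file
# name is hypothetical.
def _example_read_memmap(fname='large.lsm'):
    with TiffFile(fname) as tif:
        data = tif.asarray(out='memmap')  # numpy.memmap, not resident RAM
        shape, dtype = data.shape, data.dtype
        del data  # release the mapping before the file is closed
    return shape, dtype
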

@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_lsm_unbounderror():
    """Test read LSM: MZCYX (196, 2, 33, 512, 512)."""
    # file is > 4GB with no time axis
    fname = private_file('lsm/48h-CM-C-2.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 784
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 33
        page = tif.pages[1]
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.samplesperpixel == 3
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (196, 2, 33, 512, 512)
        assert series.dtype == numpy.uint16
        assert series.axes == 'MZCYX'
        assert series.get_axes(False) == 'MPZCYX'
        assert series.get_shape(False) == (196, 1, 2, 33, 512, 512)
        assert series.kind == 'lsm'
        if 1:
            series = tif.series[1]
            assert series.shape == (196, 2, 3, 128, 128)
            assert series.dtype == numpy.uint8
            assert series.axes == 'MZSYX'
            assert series.get_axes(False) == 'MPZSYX'
            assert series.get_shape(False) == (196, 1, 2, 3, 128, 128)
            assert series.kind == 'lsm'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (196, 2, 33, 512, 512)
        assert data.dtype == numpy.uint16
        assert data[195, 1, 32, 511, 511] == 22088
        # assert_aszarr_method(tif, data)
        if 1:
            data = tif.asarray(series=1)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == (196, 2, 3, 128, 128)
            assert data.dtype == numpy.uint8
            assert tuple(data[195, 1, :, 127, 127]) == (0, 61, 52)
            # assert_aszarr_method(tif, series=1)
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 512
        assert tags['DimensionY'] == 512
        assert tags['DimensionZ'] == 2
        assert tags['DimensionTime'] == 1
        assert tags['DimensionChannels'] == 33
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Stack'
        assert tags['User'] == 'lfdguest1'
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_lsm_incomplete(caplog):
    """Test read LSM: MTCYX (25, 49, 33, 512, 512)."""
    # file is ~18 GB incompletely written, dataoffsets zeroed
    fname = private_file('lsm/NTERT_NM-50_x2.lsm')
    with TiffFile(fname) as tif:
        assert 'LSM file incompletely written' in caplog.text
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 2450
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 33
        page = tif.pages[1]
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.samplesperpixel == 3
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.pages[-1].dataoffsets[0] == 0
        assert series.shape == (25, 49, 33, 512, 512)
        assert series.dtype == numpy.uint16
        assert series.axes == 'MTCYX'
        assert series.get_axes(False) == 'MPTCYX'
        assert series.get_shape(False) == (25, 1, 49, 33, 512, 512)
        assert series.kind == 'lsm'
        if 1:
            series = tif.series[1]
            assert series.shape == (25, 49, 3, 128, 128)
            assert series.get_shape(False) == (25, 1, 49, 3, 128, 128)
            assert series.dtype == numpy.uint8
            assert series.axes == 'MTSYX'
            assert series.get_axes(False) == 'MPTSYX'
            assert series.kind == 'lsm'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (25, 49, 33, 512, 512)
        assert data.dtype == numpy.uint16
        assert data[24, 45, 32, 511, 511] == 14648
        # assert_aszarr_method(tif, data)
        if 1:
            data = tif.asarray(series=1)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == (25, 49, 3, 128, 128)
            assert data.dtype == numpy.uint8
            assert numpy.sum(data[24, 45]) == 0
            # assert_aszarr_method(tif, series=1)
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 512
        assert tags['DimensionY'] == 512
        assert tags['DimensionZ'] == 1
        assert tags['DimensionTime'] == 49
        assert tags['DimensionChannels'] == 33
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Plane'
        assert tags['User'] == 'lfdguest1'
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_lsm_earpax2isl11():
    """Test read LSM: ZCYX (19, 3, 512, 512) uint8, RGB, LZW."""
    fname = private_file('lsm/earpax2isl11.lzw.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 38
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert not page.is_contiguous
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert corrected strip_byte_counts
        assert page.tags['StripByteCounts'].value == (262144, 262144, 262144)
        assert page.databytecounts == (131514, 192933, 167874)
        page = tif.pages[1]
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.samplesperpixel == 3
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (19, 3, 512, 512)
        assert series.dtype == numpy.uint8
        assert series.axes == 'ZCYX'
        assert series.get_axes(False) == 'ZCYX'
        assert series.get_shape(False) == (19, 3, 512, 512)
        assert series.kind == 'lsm'
        if 1:
            series = tif.series[1]
            assert series.shape == (19, 3, 128, 128)
            assert series.get_shape(False) == (19, 3, 128, 128)
            assert series.dtype == numpy.uint8
            assert series.axes == 'ZSYX'
            assert series.get_axes(False) == 'ZSYX'
            assert series.kind == 'lsm'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (19, 3, 512, 512)
        assert data.dtype == numpy.uint8
        assert tuple(data[18, :, 200, 320]) == (17, 22, 21)
        assert_aszarr_method(tif, data)
        if 1:
            data = tif.asarray(series=1)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == (19, 3, 128, 128)
            assert data.dtype == numpy.uint8
            assert tuple(data[18, :, 64, 64]) == (25, 5, 33)
            assert_aszarr_method(tif, series=1)
        # assert lsm_info tags
        tags = tif.lsm_metadata
        assert tags['DimensionX'] == 512
        assert tags['DimensionY'] == 512
        assert tags['DimensionZ'] == 19
        assert tags['DimensionTime'] == 1
        assert tags['DimensionChannels'] == 3
        # assert lsm_scan_info tags
        tags = tif.lsm_metadata['ScanInformation']
        assert tags['ScanMode'] == 'Stack'
        assert tags['User'] == 'megason'
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_lsm_mb231paxgfp_060214():
    """Test read LSM with many LZW compressed pages."""
    # TZCYX (Stack mode), (60, 31, 2, 512, 512), 3720
    fname = public_file('tifffile/MB231paxgfp_060214.lzw.lsm')
    with TiffFile(fname) as tif:
        assert tif.is_lsm
        assert tif.byteorder == '<'
        assert len(tif.pages) == 3720
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.is_lsm
        assert not page.is_contiguous
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 2
        page = tif.pages[1]
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.SEPARATE
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.samplesperpixel == 3
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.dtype == numpy.uint16
        assert series.shape == (60, 31, 2, 512, 512)
        assert series.get_shape(False) == (60, 31, 2, 512, 512)
        assert series.axes == 'TZCYX'
        assert series.get_axes(False) == 'TZCYX'
        assert series.kind == 'lsm'
        if 1:
            series = tif.series[1]
            assert series.dtype == numpy.uint8
            assert series.shape == (60, 31, 3, 128, 128)
            assert series.axes == 'TZSYX'
            assert series.kind == 'lsm'
        # assert data
        data = tif.asarray(out='memmap', maxworkers=None)
        assert isinstance(data, numpy.memmap)
        assert data.flags['C_CONTIGUOUS']
        assert data.shape == (60, 31, 2, 512, 512)
        assert data.dtype == numpy.dtype('<u2')
        del data
        assert__str__(tif, 0)


@pytest.mark.skipif(
    SKIP_PRIVATE
    or SKIP_LARGE
    or SKIP_CODECS
    or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_ndpi_4gb():
    """Test read > 4 GB Hamamatsu NDPI slide, JPEG 103680x188160."""
    fname = private_file('HamamatsuNDPI/103680x188160.ndpi')
    with TiffFile(fname) as tif:
        assert tif.is_ndpi
        assert len(tif.pages) == 8
        assert len(tif.series) == 3
        for page in tif.pages:
            assert page.ndpi_tags['Model'] == 'C13220'
        # first page
        page = tif.pages.first
        assert page.offset == 4466602683
        assert page.is_ndpi
        assert page.databytecounts[0] == 5105  # not 4461521316
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.compression == COMPRESSION.JPEG
        assert page.shape == (103680, 188160, 3)
        assert (
            page.tags['ImageLength'].offset - page.tags['ImageWidth'].offset
            == 12
        )
        assert page.tags['ImageWidth'].offset == 4466602685
        assert page.tags['ImageWidth'].valueoffset == 4466602693
        assert page.tags['ImageLength'].offset == 4466602697
        assert page.tags['ImageLength'].valueoffset == 4466602705
        assert page.tags['ReferenceBlackWhite'].offset == 4466602889
        assert page.tags['ReferenceBlackWhite'].valueoffset == 1003
        assert page.ndpi_tags['Magnification'] == 40.0
        assert page.ndpi_tags['McuStarts'][-1] == 4461516507  # corrected
        with pytest.raises(ValueError):
            page.tags['StripByteCounts'].astuple()
        if not SKIP_ZARR:
            # data = page.asarray()  # 55 GB
            with page.aszarr() as store:
                data = zarr.open(store, mode='r')
                assert data[38061, 121978].tolist() == [220, 167, 187]
        # page 7
        page = tif.pages[7]
        assert page.is_ndpi
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.NONE
        assert page.shape == (200, 600)
        assert page.ndpi_tags['Magnification'] == -2.0
        # assert page.asarray()[226, 629, 0] == 167
        # first series
        series = tif.series[0]
        assert series.kind == 'ndpi'
        assert series.name == 'S10533009'
        assert series.shape == (103680, 188160, 3)
        assert series.is_pyramidal
        assert len(series.levels) == 6
        assert len(series.pages) == 1
        # pyramid levels
        assert series.levels[1].shape == (51840, 94080, 3)
        assert series.levels[2].shape == (25920, 47040, 3)
        assert series.levels[3].shape == (12960, 23520, 3)
        assert series.levels[4].shape == (6480, 11760, 3)
        assert series.levels[5].shape == (3240, 5880, 3)
        data = series.levels[5].asarray()
        assert tuple(data[1000, 1000]) == (222, 165, 200)
        with pytest.raises(ValueError):
            page.tags['StripOffsets'].astuple()
        # cannot decode base levels since JPEG compressed size > 2 GB
        # series.levels[0].asarray()
        assert_aszarr_method(series.levels[5], data)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEGXR.available,
    reason=REASON,
)
def test_read_ndpi_jpegxr():
    """Test read Hamamatsu NDPI slide with JPEG XR compression."""
    # https://downloads.openmicroscopy.org/images/Hamamatsu-NDPI/hamamatsu/
    fname = private_file('HamamatsuNDPI/DM0014 - 2020-04-02 10.25.21.ndpi')
    with TiffFile(fname) as tif:
        assert tif.is_ndpi
        assert len(tif.pages) == 6
        assert len(tif.series) == 3
        for page in tif.pages:
            assert page.ndpi_tags['Model'] == 'C13210'
        for page in tif.pages[:4]:
            # check that all levels are corrected
            assert page.is_ndpi
            assert (
                page.tags['PhotometricInterpretation'].value
                == PHOTOMETRIC.YCBCR
            )
            assert page.tags['BitsPerSample'].value == (8, 8, 8)
            assert page.samplesperpixel == 1  # not 3
            assert page.bitspersample == 16  # not 8
            assert page.photometric == PHOTOMETRIC.MINISBLACK  # not YCBCR
            assert page.compression == COMPRESSION.JPEGXR_NDPI
        # first page
        page = tif.pages.first
        assert page.shape == (34944, 69888)  # not (34944, 69888, 3)
        assert page.databytecounts[0] == 632009
        assert page.ndpi_tags['CaptureMode'] == 17
        assert page.ndpi_tags['Magnification'] == 20.0
        if not SKIP_ZARR:
            with page.aszarr() as store:
                data = zarr.open(store, mode='r')
                assert data[28061, 41978] == 6717
        # page 5
        page = tif.pages[5]
        assert page.is_ndpi
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.NONE
        assert page.shape == (192, 566)
        assert page.ndpi_tags['Magnification'] == -2.0
        # first series
        series = tif.series[0]
        assert series.kind == 'ndpi'
        assert series.name == 'DM0014'
        assert series.shape == (34944, 69888)
        assert series.is_pyramidal
        assert len(series.levels) == 4
        assert len(series.pages) == 1
        # pyramid levels
        assert series.levels[1].shape == (17472, 34944)
        assert series.levels[2].shape == (8736, 17472)
        assert series.levels[3].shape == (4368, 8736)
        data = series.levels[3].asarray()
        assert data[1000, 1000] == 1095
        assert_aszarr_method(series.levels[3], data)
        assert__str__(tif)

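
# A minimal sketch of the lazy Zarr access pattern used by the NDPI tests:
# opening a page as a Zarr store decodes only the tiles a selection touches,
# which is how single pixels are read from >4 GB base levels above. The file
# name is hypothetical.
def _example_read_region(fname='slide.ndpi'):
    import zarr  # module-level import assumed available in these tests

    with TiffFile(fname) as tif:
        with tif.pages.first.aszarr() as store:
            data = zarr.open(store, mode='r')
            region = data[:256, :256]  # decodes only the tiles touched
    return region
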

@pytest.mark.skipif(
    SKIP_PRIVATE
    or SKIP_LARGE
    or SKIP_CODECS
    or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_ndpi_databytecounts():
    """Test read Hamamatsu NDPI slide databytecounts do not overflow."""
    # https://forum.image.sc/t/some-ndpi-files-not-opening-in-qupath-v0-4-1
    fname = private_file('HamamatsuNDPI/doesnt_work.ndpi')
    with TiffFile(fname) as tif:
        assert tif.is_ndpi
        assert len(tif.pages) == 23
        assert len(tif.series) == 3
        # first series
        series = tif.series[0]
        assert series.kind == 'ndpi'
        assert series.name == 'Baseline'
        assert series.shape == (3, 60928, 155648, 3)
        assert series.is_pyramidal
        assert len(series.levels) == 7
        assert len(series.pages) == 3
        # pyramid levels
        assert series.levels[1].shape == (3, 30464, 77824, 3)
        assert series.levels[2].shape == (3, 15232, 38912, 3)
        assert series.levels[3].shape == (3, 7616, 19456, 3)
        assert series.levels[4].shape == (3, 3808, 9728, 3)
        assert series.levels[5].shape == (3, 1904, 4864, 3)
        assert series.levels[6].shape == (3, 952, 2432, 3)
        # 3rd z-slice in base layer
        page = series.pages[2]
        assert page.index == 14
        assert page.shape == (60928, 155648, 3)
        assert page.dataoffsets[-1] == 4718518695
        assert page.databytecounts[-1] == 4338
        assert page.ndpi_tags['Magnification'] == 40.0
        assert page.ndpi_tags['FocusTime'] == 13
        assert page.ndpi_tags['ScannerSerialNumber'] == '680057'
        assert tuple(page.asarray()[60000, 150000]) == (216, 221, 217)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_ndpi_layers():
    """Test read Hamamatsu NDPI slide with 5 layers."""
    fname = private_file('HamamatsuNDPI/hamamatsu_5-layers.ndpi')
    with TiffFile(fname) as tif:
        assert tif.is_ndpi
        assert len(tif.pages) == 37
        assert len(tif.series) == 3
        for page in tif.pages:
            assert page.ndpi_tags['Model'] == 'C10730-12'
        for page in tif.pages[:4]:
            assert page.is_ndpi
            assert (
                page.tags['PhotometricInterpretation'].value
                == PHOTOMETRIC.YCBCR
            )
            assert page.tags['BitsPerSample'].value == (8, 8, 8)
            assert page.samplesperpixel == 3
            assert page.bitspersample == 8
            assert page.photometric == PHOTOMETRIC.YCBCR
            assert page.compression == COMPRESSION.JPEG
        # first page
        page = tif.pages.first
        assert page.shape == (12032, 11520, 3)
        assert page.databytecounts[0] == 1634
        assert page.ndpi_tags['CaptureMode'] == 0
        assert page.ndpi_tags['Magnification'] == 20.0
        if not SKIP_ZARR:
            with page.aszarr() as store:
                data = zarr.open(store, mode='r')
                assert list(data[10000, 10000]) == [232, 215, 225]
        # last page
        page = tif.pages[-1]
        assert page.is_ndpi
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.NONE
        assert page.shape == (196, 575)
        assert page.ndpi_tags['Magnification'] == -2.0
        # first series
        series = tif.series[0]
        assert series.kind == 'ndpi'
        assert series.name == 'Baseline'
        assert series.shape == (7, 12032, 11520, 3)
        assert series.is_pyramidal
        assert len(series.levels) == 5
        assert len(series.pages) == 7
        # pyramid levels
        assert series.levels[1].shape == (7, 6016, 5760, 3)
        assert series.levels[2].shape == (7, 3008, 2880, 3)
        assert series.levels[3].shape == (7, 1504, 1440, 3)
        assert series.levels[4].shape == (7, 752, 720, 3)
        data = series.levels[3].asarray()
        assert_array_equal(
            data[:, 1000, 1000],
            [
                [234, 217, 225],
                [231, 213, 225],
                [233, 215, 227],
                [234, 217, 225],
                [234, 216, 228],
                [233, 216, 224],
                [232, 214, 226],
            ],
        )
        assert_aszarr_method(series.levels[3], data)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_svs_cmu1():
    """Test read Aperio SVS slide, JPEG and LZW."""
    fname = private_file('AperioSVS/CMU-1.svs')
    with TiffFile(fname) as tif:
        assert tif.is_svs
        assert not tif.is_scanimage
        assert len(tif.pages) == 6
        assert len(tif.series) == 4
        for page in tif.pages:
            svs_description_metadata(page.description)
        # first page
        page = tif.pages.first
        assert page.is_svs
        assert not page.is_jfif
        assert page.is_subsampled
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.is_tiled
        assert page.compression == COMPRESSION.JPEG
        assert page.shape == (32914, 46000, 3)
        metadata = svs_description_metadata(page.description)
        assert metadata['Header'].startswith('Aperio Image Library')
        assert metadata['Originalheight'] == 33014
        # page 4
        page = tif.pages[4]
        assert page.is_svs
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.shape == (463, 387, 3)
        metadata = svs_description_metadata(page.description)
        assert 'label 387x463' in metadata['Header']
        assert_aszarr_method(page)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG2K.available,
    reason=REASON,
)
def test_read_svs_jp2k_33003_1():
    """Test read Aperio SVS slide, JP2000 and LZW."""
    fname = private_file('AperioSVS/JP2K-33003-1.svs')
    with TiffFile(fname) as tif:
        assert tif.is_svs
        assert not tif.is_scanimage
        assert len(tif.pages) == 6
        assert len(tif.series) == 4
        for page in tif.pages:
            svs_description_metadata(page.description)
        # first page
        page = tif.pages.first
        assert page.is_svs
        assert not page.is_subsampled
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.is_tiled
        assert page.compression == COMPRESSION.APERIO_JP2000_YCBC
        assert page.shape == (17497, 15374, 3)
        metadata = svs_description_metadata(page.description)
        assert metadata['Header'].startswith('Aperio Image Library')
        assert metadata['Originalheight'] == 17597
        # page 4
        page = tif.pages[4]
        assert page.is_svs
        assert page.is_reduced
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.shape == (422, 415, 3)
        metadata = svs_description_metadata(page.description)
        assert 'label 415x422' in metadata['Header']
        assert_aszarr_method(page)
        assert__str__(tif)


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_bif(caplog):
    """Test read Ventana BIF slide."""
    fname = private_file('VentanaBIF/OS-2.bif')
    with TiffFile(fname) as tif:
        assert tif.is_bif
        assert len(tif.pages) == 12
        assert len(tif.series) == 3
        # first page
        page = tif.pages.first
        assert page.is_bif
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.is_tiled
        assert page.compression == COMPRESSION.JPEG
        assert page.shape == (3008, 1008, 3)
        series = tif.series
        assert 'not stiched' in caplog.text
        # baseline
        series = tif.series[0]
        assert series.kind == 'bif'
        assert series.name == 'Baseline'
        assert len(series.levels) == 10
        assert series.shape == (82960, 128000, 3)
        assert series.dtype == numpy.uint8
        # level 0
        page = series.pages[0]
        assert page.is_bif
        assert page.is_tiled
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.compression == COMPRESSION.JPEG
        assert page.shape == (82960, 128000, 3)
        assert page.description == 'level=0 mag=40 quality=90'
        # level 5
        page = series.levels[5].pages[0]
        assert not page.is_bif
        assert page.is_tiled
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.compression == COMPRESSION.JPEG
        assert page.shape == (2600, 4000, 3)
        assert page.description == 'level=5 mag=1.25 quality=90'
        assert_aszarr_method(page)
        assert__str__(tif)

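
# A minimal sketch of the pyramid pattern shared by the SVS, BIF, and SCN
# tests: each reduced resolution is a TiffPageSeries in series.levels, so a
# thumbnail can be read from the smallest level without decoding the
# baseline. The file name is hypothetical.
def _example_read_thumbnail(fname='slide.svs'):
    with TiffFile(fname) as tif:
        series = tif.series[0]
        if series.is_pyramidal:
            return series.levels[-1].asarray()  # smallest pyramid level
        return series.asarray()
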

@pytest.mark.skipif(
    SKIP_PRIVATE
    or SKIP_LARGE
    or SKIP_CODECS
    or not imagecodecs.JPEG.available,
    reason=REASON,
)
def test_read_scn_collection():
    """Test read Leica SCN slide, JPEG."""
    # collection of 43 CZYX images
    # https://forum.image.sc/t/43585
    fname = private_file(
        'LeicaSCN/19-3-12_b5992c2e-5b6e-46f2-bf9b-d5872bdebdc1.SCN'
    )
    with TiffFile(fname) as tif:
        assert tif.is_scn
        assert tif.is_bigtiff
        assert len(tif.pages) == 5358
        assert len(tif.series) == 46
        # first page
        page = tif.pages.first
        assert page.is_scn
        assert page.is_tiled
        assert page.photometric == PHOTOMETRIC.YCBCR
        assert page.compression == COMPRESSION.JPEG
        assert page.shape == (12990, 5741, 3)
        metadata = tif.scn_metadata
        assert metadata.startswith('<?xml version="1.0" encoding="utf-8"?>')
        for series in tif.series[2:]:
            assert series.kind == 'scn'
            assert series.axes == 'CZYX'
            assert series.shape[:2] == (4, 8)
            assert len(series.levels) in {2, 3, 4, 5}
            assert len(series.pages) == 32
        # third series
        series = tif.series[2]
        assert series.shape == (4, 8, 946, 993)
        assert_aszarr_method(series)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_scanimage_metadata():
    """Test read ScanImage metadata."""
    fname = private_file('ScanImage/TS_UnitTestImage_BigTIFF.tif')
    with open(fname, 'rb') as fh:
        frame_data, roi_data, version = read_scanimage_metadata(fh)
    assert version == 3
    assert frame_data['SI.hChannels.channelType'] == ['stripe', 'stripe']
    assert roi_data['RoiGroups']['imagingRoiGroup']['ver'] == 1


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_scanimage_2021():
    """Test read ScanImage metadata."""
    # https://github.com/cgohlke/tifffile/issues/46
    fname = private_file('ScanImage/ScanImage2021_3frames.tif')
    with open(fname, 'rb') as fh:
        frame_data, roi_data, version = read_scanimage_metadata(fh)
    assert frame_data['SI.hChannels.channelType'] == [
        'stripe',
        'stripe',
        'stripe',
        'stripe',
    ]
    assert version == 4
    assert roi_data['RoiGroups']['imagingRoiGroup']['ver'] == 1
    with TiffFile(fname) as tif:
        assert tif.is_scanimage
        assert len(tif.pages) == 3
        assert len(tif.series) == 1
        assert tif.series[0].shape == (3, 256, 256)
        assert tif.series[0].axes == 'TYX'
        # non-varying scanimage_metadata
        assert tif.scanimage_metadata['version'] == 4
        assert 'FrameData' in tif.scanimage_metadata
        assert 'RoiGroups' in tif.scanimage_metadata
        # assert page properties
        page = tif.pages.first
        assert page.is_scanimage
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # description tags
        metadata = scanimage_description_metadata(page.description)
        assert metadata['epoch'] == [2021, 3, 1, 17, 31, 28.047]
        assert_aszarr_method(tif)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_scanimage_no_framedata():
    """Test read ScanImage no FrameData."""
    fname = private_file('ScanImage/PSF001_ScanImage36.tif')
    with TiffFile(fname) as tif:
        assert tif.is_scanimage
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        # no non-tiff scanimage_metadata
        assert not tif.scanimage_metadata
        # assert page properties
        page = tif.pages.first
        assert page.is_scanimage
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # description tags
        metadata = scanimage_description_metadata(page.description)
        assert metadata['state.software.version'] == 3.6
        assert_aszarr_method(tif)
        assert__str__(tif)

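
# A minimal sketch of the read_scanimage_metadata pattern from the tests
# above: the function parses static frame data, ROI groups, and the format
# version from an open binary file handle without decoding image data. The
# file name is hypothetical.
def _example_scanimage_metadata(fname='scanimage.tif'):
    with open(fname, 'rb') as fh:
        frame_data, roi_data, version = read_scanimage_metadata(fh)
    return version, 'RoiGroups' in roi_data, len(frame_data)
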

@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_scanimage_2gb():
    """Test read ScanImage non-BigTIFF > 2 GB.

    https://github.com/MouseLand/suite2p/issues/149
    """
    fname = private_file('ScanImage/M161209TH_01__001.tif')
    with TiffFile(fname) as tif:
        assert tif.is_scanimage
        assert len(tif.pages) == 5980
        assert len(tif.series) == 1
        assert tif.series[0].kind == 'scanimage'
        # no non-tiff scanimage_metadata
        assert 'version' not in tif.scanimage_metadata
        assert 'FrameData' not in tif.scanimage_metadata
        assert 'RoiGroups' not in tif.scanimage_metadata
        # assert page properties
        page = tif.pages.first
        assert isinstance(page, TiffPage)
        assert not page.is_frame
        assert not page.is_virtual
        assert not page.is_subifd
        assert page.is_scanimage
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # using virtual frames
        frame = tif.pages[-1]
        assert isinstance(frame, TiffFrame)
        assert frame.is_frame
        assert frame.is_virtual
        assert not frame.is_subifd
        assert frame.offset <= 0
        assert frame.index == 5979
        assert frame.dataoffsets[0] == 3163182856
        assert frame.databytecounts[0] == 8192  # 524288
        assert len(frame.dataoffsets) == 64
        assert len(frame.databytecounts) == 64
        # description tags
        metadata = scanimage_description_metadata(page.description)
        assert metadata['scanimage.SI5.VERSION_MAJOR'] == 5
        # assert data
        data = tif.asarray()
        assert data[5979, 256, 256] == 71
        data = frame.asarray()
        assert data[256, 256] == 71
        assert_aszarr_method(tif)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_scanimage_bigtiff():
    """Test read ScanImage BigTIFF."""
    # https://github.com/cgohlke/tifffile/issues/29
    fname = private_file('ScanImage/area1__00001.tif')
    with TiffFile(fname) as tif:
        assert tif.is_scanimage
        assert len(tif.pages) == 162
        assert len(tif.series) == 1
        assert tif.series[0].kind == 'scanimage'
        # assert page properties
        page = tif.pages.first
        assert page.is_scanimage
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # metadata in description, software, artist tags
        metadata = scanimage_description_metadata(page.description)
        assert metadata['frameNumbers'] == 1
        metadata = scanimage_description_metadata(page.tags['Software'].value)
        assert metadata['SI.TIFF_FORMAT_VERSION'] == 3
        metadata = scanimage_artist_metadata(page.tags['Artist'].value)
        assert metadata['RoiGroups']['imagingRoiGroup']['ver'] == 1
        metadata = tif.scanimage_metadata
        assert metadata['version'] == 3
        assert metadata['FrameData']['SI.TIFF_FORMAT_VERSION'] == 3
        assert metadata['RoiGroups']['imagingRoiGroup']['ver'] == 1
        assert 'Description' not in metadata
        # assert page offsets are correct
        assert tif.pages[-1].offset == 84527590  # not 84526526 (calculated)
        # read image data
        assert_aszarr_method(tif)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_single_channel():
    """Test read OME image."""
    # 2D (single image)
    # OME-TIFF reference images from
    # https://www.openmicroscopy.org/site/support/ome-model/ome-tiff
    fname = public_file('OME/bioformats-artificial/single-channel.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert not series.is_multifile
        assert series.dtype == numpy.int8
        assert series.shape == (167, 439)
        assert series.axes == 'YX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (1, 1, 1, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (167, 439)
        assert data.dtype == numpy.int8
        assert data[158, 428] == 91
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_multi_channel():
    """Test read OME multi channel image."""
    # 2D (3 channels)
    fname = public_file('OME/bioformats-artificial/multi-channel.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 3
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'CYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (1, 3, 1, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 167, 439)
        assert data.dtype == numpy.int8
        assert data[2, 158, 428] == 91
        assert_aszarr_method(tif, data)
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert_aszarr_method(tif, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_z_series():
    """Test read OME volume."""
    # 3D (5 focal planes)
    fname = public_file('OME/bioformats-artificial/z-series.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 5
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (5, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'ZYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (1, 1, 5, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (5, 167, 439)
        assert data.dtype == numpy.int8
        assert data[4, 158, 428] == 91
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_multi_channel_z_series():
    """Test read OME multi-channel volume."""
    # 3D (5 focal planes, 3 channels)
    fname = public_file(
        'OME/bioformats-artificial/multi-channel-z-series.ome.tiff'
    )
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 15
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 5, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'CZYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (1, 3, 5, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 5, 167, 439)
        assert data.dtype == numpy.int8
        assert data[2, 4, 158, 428] == 91
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_time_series():
    """Test read OME time-series of images."""
    # 3D (7 time points)
    fname = public_file('OME/bioformats-artificial/time-series.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 7
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'TYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (7, 1, 1, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 167, 439)
        assert data.dtype == numpy.int8
        assert data[6, 158, 428] == 91
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_multi_channel_time_series():
    """Test read OME time-series of multi-channel images."""
    # 3D (7 time points, 3 channels)
    fname = public_file(
        'OME/bioformats-artificial/multi-channel-time-series.ome.tiff'
    )
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 21
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 3, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'TCYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (7, 3, 1, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 3, 167, 439)
        assert data.dtype == numpy.int8
        assert data[6, 2, 158, 428] == 91
        assert_aszarr_method(tif, data)
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert_aszarr_method(tif, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_4d_series():
    """Test read OME time-series of volumes."""
    # 4D (7 time points, 5 focal planes)
    fname = public_file('OME/bioformats-artificial/4D-series.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 35
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 5, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'TZYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 5, 167, 439)
        assert data.dtype == numpy.int8
        assert data[6, 4, 158, 428] == 91
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_multi_channel_4d_series():
    """Test read OME time-series of multi-channel volumes."""
    # 4D (7 time points, 5 focal planes, 3 channels)
    fname = public_file(
        'OME/bioformats-artificial/multi-channel-4D-series.ome.tiff'
    )
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 105
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 3, 5, 167, 439)
        assert series.dtype == numpy.int8
        assert series.axes == 'TCZYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (7, 3, 5, 167, 439, 1)
        assert series.get_axes(False) == 'TCZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 3, 5, 167, 439)
        assert data.dtype == numpy.int8
        assert data[6, 0, 4, 158, 428] == 91
        assert_aszarr_method(tif, data)
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert_aszarr_method(tif, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_modulo_flim():
    """Test read OME modulo FLIM."""
    fname = public_file('OME/modulo/FLIM-ModuloAlongC.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 16
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 180
        assert page.imagelength == 150
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 8, 150, 180)
        assert series.dtype == numpy.int8
        assert series.axes == 'CHYX'
        assert series.kind == 'ome'
        assert series.get_shape(False) == (1, 2, 8, 1, 150, 180, 1)
        assert series.get_axes(False) == 'TCHZYXS'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 8, 150, 180)
        assert data.dtype == numpy.int8
        assert data[1, 7, 143, 172] == 92
        assert_aszarr_method(tif, data)
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert_aszarr_method(tif, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_modulo_flim_tcspc():
    """Test read OME modulo FLIM TSCPC."""


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_modulo_flim_tcspc():
    """Test read OME modulo FLIM TSCPC."""
    # Two channels each recorded at two timepoints and eight histogram bins
    fname = public_file('OME/modulo/FLIM-ModuloAlongT-TSCPC.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 32
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 180
        assert page.imagelength == 200
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 8, 2, 200, 180)
        assert series.dtype == numpy.int8
        assert series.axes == 'THCYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 8, 2, 200, 180)
        assert data.dtype == numpy.int8
        assert data[1, 7, 1, 190, 161] == 92
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_modulo_spim():
    """Test read OME modulo SPIM."""
    # 2x2 tile of planes each recorded at 4 angles
    fname = public_file('OME/modulo/SPIM-ModuloAlongZ.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 192
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value == 'OME Bio-Formats 5.2.0-SNAPSHOT'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 160
        assert page.imagelength == 220
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 4, 2, 4, 2, 220, 160)
        assert series.dtype == numpy.uint8
        assert series.axes == 'TRZACYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 4, 2, 4, 2, 220, 160)
        assert data.dtype == numpy.uint8
        assert data[2, 3, 1, 3, 1, 210, 151] == 92
        assert_aszarr_method(tif, data)
        assert__str__(tif)
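

# OME 'ModuloAlong{T,C,Z}' annotations pack extra dimensions (for example
# lifetime histogram bins, rotations, angles, or emission/excitation
# wavelengths) along the standard T, C, or Z axes. tifffile unfolds them
# into separate axes, which is why the modulo tests here assert axis codes
# beyond TCZYXS, such as 'H', 'R', 'A', 'E', and 'P'.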


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_modulo_lambda():
    """Test read OME modulo LAMBDA."""
    # Excitation of 5 wavelengths [big-lambda] each recorded at 10 emission
    # wavelength ranges [lambda].
    fname = public_file('OME/modulo/LAMBDA-ModuloAlongZ-ModuloAlongT.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 50
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value == 'OME Bio-Formats 5.2.0-SNAPSHOT'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 200
        assert page.imagelength == 200
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (10, 5, 200, 200)
        assert series.dtype == numpy.uint8
        assert series.axes == 'EPYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (10, 5, 200, 200)
        assert data.dtype == numpy.uint8
        assert data[9, 4, 190, 192] == 92
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC, reason=REASON)
def test_read_ome_multiimage_pixels():
    """Test read OME with three image series."""
    fname = public_file('OME/bioformats-artificial/multi-image-pixels.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 86
        assert len(tif.series) == 3
        # assert page properties
        for i, axes, shape in (
            (0, 'CTYX', (2, 7, 555, 431)),
            (1, 'TZYX', (6, 2, 461, 348)),
            (2, 'TZCYX', (4, 5, 3, 239, 517)),
        ):
            series = tif.series[i]
            assert series.kind == 'ome'
            page = series.pages[0]
            assert page.is_contiguous
            assert page.tags['Software'].value == 'LOCI Bio-Formats'
            assert page.compression == COMPRESSION.NONE
            assert page.imagewidth == shape[-1]
            assert page.imagelength == shape[-2]
            assert page.bitspersample == 8
            assert page.samplesperpixel == 1
            # assert series properties
            assert series.shape == shape
            assert series.dtype == numpy.uint8
            assert series.axes == axes
            assert not series.is_multifile
            # assert data
            data = tif.asarray(series=i)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == shape
            assert data.dtype == numpy.uint8
            assert_aszarr_method(series, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_multiimage_nouuid():
    """Test read single-file, multi-image OME without UUID."""
    fname = private_file(
        'OMETIFF.jl/singles/181003_multi_pos_time_course_1_MMStack.ome.tif'
    )
    with TiffFile(fname, is_mmstack=False) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 20
        assert len(tif.series) == 2
        # assert page properties
        for i in (0, 1):
            series = tif.series[i]
            page = series.pages[0]
            assert page.is_imagej == (i == 0)
            assert page.is_ome == (i == 0)
            assert page.is_micromanager
            assert page.is_contiguous
            assert page.compression == COMPRESSION.NONE
            assert page.imagewidth == 256
            assert page.imagelength == 256
            assert page.bitspersample == 16
            assert page.samplesperpixel == 1
            # assert series properties
            assert series.shape == (10, 256, 256)
            assert series.dtype == numpy.uint16
            assert series.axes == 'TYX'
            assert series.kind == 'ome'
            assert not series.is_multifile
            # assert data
            data = tif.asarray(series=i)
            assert_array_equal(
                data, imread(fname, series=i, is_micromanager=False)
            )
            assert isinstance(data, numpy.ndarray)
            assert data.shape == (10, 256, 256)
            assert data.dtype == numpy.uint16
            assert data[5, 128, 128] == (18661, 16235)[i]
            assert_aszarr_method(series, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_zen_2chzt():
    """Test read OME time-series of two-channel volumes by ZEN 2011."""
    fname = private_file('OME/zen_2chzt.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 798
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value == 'ZEN 2011 (blue edition)'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 400
        assert page.imagelength == 300
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 19, 21, 300, 400)
        assert series.dtype == numpy.uint8
        assert series.axes == 'CTZYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 19, 21, 300, 400)
        assert data.dtype == numpy.uint8
        assert data[1, 10, 10, 100, 245] == 78
        assert_aszarr_method(tif, data)
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_ome_multifile():
    """Test read OME CTZYX series in 86 files."""
    # (2, 43, 10, 512, 512) CTZYX uint8 in 86 files, 10 pages each
    fname = public_file('OME/tubhiswt-4D/tubhiswt_C0_TP10.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 10
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 43, 10, 512, 512)
        assert series.dtype == numpy.uint8
        assert series.axes == 'CTZYX'
        assert series.kind == 'ome'
        assert series.is_multifile
        # assert other files are closed after TiffFile._series_ome
        for page in tif.series[0].pages:
            assert bool(page.parent.filehandle._fh) == (page.parent == tif)
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.shape == (2, 43, 10, 512, 512)
        assert data.dtype == numpy.uint8
        assert data[1, 42, 9, 426, 272] == 123
        # assert other files are still closed after TiffFile.asarray
        for page in tif.series[0].pages:
            assert bool(page.parent.filehandle._fh) == (page.parent == tif)
        assert tif.filehandle._fh
        assert__str__(tif)
        # test aszarr
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        del data
        # assert other files are still closed after ZarrStore.close
        for page in tif.series[0].pages:
            assert bool(page.parent.filehandle._fh) == (page.parent == tif)
    # assert all files stay open
    # with TiffFile(fname) as tif:
    #     for page in tif.series[0].pages:
    #         self.assertTrue(page.parent.filehandle._fh)
    #     data = tif.asarray(out='memmap')
    #     for page in tif.series[0].pages:
    #         self.assertTrue(page.parent.filehandle._fh)
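

# tifffile opens the sibling files referenced by the OME-XML on demand and
# closes them again after reading, as the filehandle checks above assert.
# Minimal sketch of user-side reading (not collected by pytest; the file
# name is a placeholder, imread is the tifffile.imread this module imports):
def _example_read_multifile_ome(fname='tubhiswt_C0_TP10.ome.tif'):
    # reading any one file of the series assembles the full CTZYX array
    data = imread(fname)
    return data.shape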


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_ome_multifile_missing(caplog):
    """Test read OME referencing missing files."""
    # (2, 43, 10, 512, 512) CTZYX uint8, 85 files missing
    fname = private_file('OME/tubhiswt_C0_TP34.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 10
        assert len(tif.series) == 1
        assert 'failed to read' in caplog.text
        # assert page properties
        page = tif.pages.first
        TiffPage._str(page, 4)
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        page = tif.pages[-1]
        TiffPage._str(page, 4)
        assert page.shape == (512, 512)
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 43, 10, 512, 512)
        assert series.dtype == numpy.uint8
        assert series.axes == 'CTZYX'
        assert series.kind == 'ome'
        assert series.is_multifile
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.shape == (2, 43, 10, 512, 512)
        assert data.dtype == numpy.uint8
        assert data[0, 34, 4, 303, 206] == 82
        assert data[1, 25, 2, 425, 272] == 196
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_ome_multifile_binaryio(caplog):
    """Test read OME multifile series with BinaryIO."""
    # (2, 43, 10, 512, 512) CTZYX uint8, 85 files missing
    fname = private_file('OME/tubhiswt_C0_TP34.ome.tif')
    with open(fname, 'rb') as fh:
        bytesio = BytesIO(fh.read())
    with TiffFile(bytesio) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 10
        assert len(tif.series) == 1
        assert 'failed to read' in caplog.text
        # assert page properties
        page = tif.pages.first
        TiffPage._str(page, 4)
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        page = tif.pages[-1]
        TiffPage._str(page, 4)
        assert page.shape == (512, 512)
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 43, 10, 512, 512)
        assert series.dtype == numpy.uint8
        assert series.axes == 'CTZYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (2, 43, 10, 512, 512)
        assert data.dtype == numpy.uint8
        assert data[0, 34, 4, 303, 206] == 82
        assert data[1, 25, 2, 425, 272] == 0
        assert_aszarr_method(tif, data)
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_companion(caplog):
    """Test read multifile OME-TIFF using companion file."""
    fname = private_file('OME/companion/multifile-Z2.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        with caplog.at_level(logging.DEBUG):
            assert tif.series[0].kind == 'generic'
            assert 'OME series is BinaryOnly' in caplog.text
    with open(
        private_file('OME/companion/multifile.companion.ome'),
        encoding='utf-8',
    ) as fh:
        omexml = fh.read()
    with TiffFile(fname, omexml=omexml) as tif:
        assert tif.is_ome
        series = tif.series[0]
        assert series.kind == 'ome'
        image = series.asarray()
    fname = private_file('OME/companion/multifile-Z1.ome.tiff')
    image2 = imread(fname)
    assert_array_equal(image, image2)
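

# 'BinaryOnly' OME-TIFF files contain no OME-XML themselves; without the
# companion metadata the series falls back to kind 'generic'. Passing the
# companion file's XML via TiffFile(fname, omexml=...) restores the OME
# series, as exercised above.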


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_rgb():
    """Test read OME RGB image."""
    # https://github.com/openmicroscopy/bioformats/pull/1986
    fname = private_file('OME/test_rgb.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 1280
        assert page.imagelength == 720
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 720, 1280)
        assert series.dtype == numpy.uint8
        assert series.axes == 'SYX'
        assert series.kind == 'ome'
        assert series.dataoffset == 17524
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (3, 720, 1280)
        assert data.dtype == numpy.uint8
        assert data[1, 158, 428] == 253
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_ome_samplesperpixel():
    """Test read OME image stack with SamplesPerPixel>1."""
    # Reported by Grzegorz Bokota on 2019.1.30
    fname = private_file('OME/test_samplesperpixel.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 6
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 1024
        assert page.imagelength == 1024
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (6, 3, 1024, 1024)
        assert series.dtype == numpy.uint8
        assert series.axes == 'ZSYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (6, 3, 1024, 1024)
        assert data.dtype == numpy.uint8
        assert tuple(data[5, :, 191, 449]) == (253, 0, 28)
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_float_modulo_attributes():
    """Test read OME with floating point modulo attributes."""
    # reported by Start Berg. File by Lorenz Maier.
    fname = private_file('OME/float_modulo_attributes.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 2
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 512, 512)
        assert series.dtype == numpy.uint16
        assert series.axes == 'QYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 512, 512)
        assert data.dtype == numpy.uint16
        assert data[1, 158, 428] == 51
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_cropped(caplog):
    """Test read bad OME by ImageJ cropping."""
    # ImageJ produces invalid ome-xml when cropping
    # http://lists.openmicroscopy.org.uk/pipermail/ome-devel/2013-December
    # /002631.html
    # Reported by Hadrien Mary on Dec 11, 2013
    fname = private_file('ome/cropped.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        assert 'invalid TiffData index' in caplog.text
        # assert page properties
        page = tif.pages.first
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.imagewidth == 324
        assert page.imagelength == 249
        assert page.bitspersample == 16
        # assert series properties
        series = tif.series[0]
        assert series.shape == (5, 10, 2, 249, 324)
        assert series.dtype == numpy.uint16
        assert series.axes == 'TZCYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (5, 10, 2, 249, 324)
        assert data.dtype == numpy.uint16
        assert data[4, 9, 1, 175, 123] == 9605
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_ome_corrupted_page(caplog):
    """Test read OME with corrupted but not referenced page."""
    # https://forum.image.sc/t/qupath-0-2-0-not-able-to-open-ome-tiff/23821/3
    fname = private_file('ome/2019_02_19__7760_s1.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.is_bigtiff
        assert tif.byteorder == '<'
        assert len(tif.pages) == 5
        assert len(tif.series) == 1
        assert 'missing required tags' in caplog.text
        # assert page properties
        page = tif.pages.first
        assert page.imagewidth == 7506
        assert page.imagelength == 7506
        assert page.bitspersample == 16
        # assert series properties
        series = tif.series[0]
        assert series.shape == (4, 7506, 7506)
        assert series.dtype == numpy.uint16
        assert series.axes == 'CYX'
        assert series.kind == 'ome'
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (4, 7506, 7506)
        assert data.dtype == numpy.uint16
        assert tuple(data[:, 2684, 2684]) == (496, 657, 7106, 469)
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_nikon():
    """Test read bad OME by Nikon."""
    # OME-XML references only first image
    # received from E. Gratton
    fname = private_file('OME/Nikon-cell011.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1000
        assert len(tif.series) == 1
        # assert 'index out of range' in caplog.text
        # assert page properties
        page = tif.pages.first
        assert page.photometric != PHOTOMETRIC.RGB
        assert page.imagewidth == 1982
        assert page.imagelength == 1726
        assert page.bitspersample == 16
        assert page.is_contiguous
        assert (
            page.tags['ImageLength'].offset - page.tags['ImageWidth'].offset
            == 20
        )
        assert page.tags['ImageWidth'].offset == 6856262146
        assert page.tags['ImageWidth'].valueoffset == 6856262158
        assert page.tags['ImageLength'].offset == 6856262166
        assert page.tags['ImageLength'].valueoffset == 6856262178
        assert page.tags['StripByteCounts'].offset == 6856262366
        assert page.tags['StripByteCounts'].valueoffset == 6856262534
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 1
        assert len(series.pages) == 1
        assert series.dataoffset == 16  # contiguous
        assert series.shape == (1726, 1982)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YX'
        assert series.kind == 'ome'
        assert__str__(tif)
    with TiffFile(fname, is_ome=False) as tif:
        assert not tif.is_ome
        # assert series properties
        series = tif.series[0]
        assert len(series.pages) == 1000
        assert series.dataoffset is None  # not contiguous
        assert series.shape == (1000, 1726, 1982)
        assert series.dtype == numpy.uint16
        assert series.axes == 'IYX'
        assert series.kind == 'uniform'
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ome_shape_mismatch(caplog):
    """Test read OME with page shape mismatch."""
    # TCX (20000, 2, 500) is stored in 2 pages of (20000, 500)
    # probably exported by ZEN Software
    fname = private_file('OME/Image 7.ome_h00.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 2
        assert len(tif.series) == 2
        assert 'cannot handle discontiguous storage' in caplog.text
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 500
        assert page.imagelength == 20000
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        page = tif.pages[1]
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.imagewidth == 500
        assert page.imagelength == 20000
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (20000, 500)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YX'
        assert series.dataoffset == 8
        assert series.kind == 'generic'


@pytest.mark.skipif(
    SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG2K.available,
    reason=REASON,
)
def test_read_ome_jpeg2000_be():
    """Test read JPEG2000 compressed big-endian OME-TIFF."""
    fname = private_file('OME/mitosis.jpeg2000.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 510
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == COMPRESSION.APERIO_JP2000_YCBC
        assert page.imagewidth == 171
        assert page.imagelength == 196
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (51, 5, 2, 196, 171)
        assert series.dtype == numpy.uint16
        assert series.axes == 'TZCYX'
        assert series.kind == 'ome'
        # assert data
        data = page.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (196, 171)
        assert data.dtype == numpy.uint16
        assert data[0, 0] == 1904
        assert_aszarr_method(page, data)
        assert_aszarr_method(page, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_ome_samplesperpixel_mismatch(caplog):
    """Test read OME with SamplesPerPixel mismatch: OME=1, TIFF=4."""
    # https://forum.image.sc/t/ilastik-refuses-to-load-image-file/48541/1
    fname = private_file('OME/MismatchSamplesPerPixel.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 2080
        assert page.imagelength == 1552
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        # assert series properties
        series = tif.series[0]
        assert 'cannot handle discontiguous storage' in caplog.text
        assert series.shape == (1552, 2080, 4)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert series.kind == 'generic'  # ome series failed
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (1552, 2080, 4)
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON)
@pytest.mark.parametrize('chunkmode', [0, 2])
def test_read_ome_multiscale(chunkmode):
    """Test read pyramidal OME file."""
    fname = public_file('tifffile/multiscene_pyramidal.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1025
        assert len(tif.series) == 2
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'ome'
        assert series.shape == (16, 32, 2, 256, 256)
        assert series.dtype == numpy.uint8
        assert series.axes == 'TZCYX'
        assert series.is_pyramidal
        assert not series.is_multifile
        series = tif.series[1]
        assert series.kind == 'ome'
        assert series.shape == (128, 128, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YXS'
        assert not series.is_pyramidal
        assert not series.is_multifile
        # assert data
        data = tif.asarray()
        assert data.shape == (16, 32, 2, 256, 256)
        assert_aszarr_method(tif, data, chunkmode=chunkmode)
        assert__str__(tif)
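

# Pyramidal series expose sub-resolution levels via TiffPageSeries.levels;
# level 0 is the series itself. A minimal sketch (hypothetical file name),
# assuming only the public API:
#
#   with TiffFile('pyramid.ome.tif') as tif:
#       series = tif.series[0]
#       if series.is_pyramidal:
#           thumbnail = series.levels[-1].asarray()  # smallest level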


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_andor_light_sheet_512p():
    """Test read Andor."""
    # 100x512x512, uint16
    fname = private_file('andor/light sheet 512px.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        assert tif.is_andor
        # assert page properties
        page = tif.pages.first
        assert page.is_andor
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        t = page.andor_tags
        assert t['SoftwareVersion'] == '4.23.30014.0'
        assert t['Frames'] == 100.0
        # assert series properties
        series = tif.series[0]
        assert series.shape == (100, 512, 512)
        assert series.dtype == numpy.uint16
        assert series.axes == 'IYX'
        assert series.kind == 'uniform'
        assert isinstance(series.pages[2], TiffFrame)
        # assert data
        data = tif.asarray()
        assert data.shape == (100, 512, 512)
        assert data.dtype == numpy.uint16
        assert round(abs(data[50, 256, 256] - 703), 7) == 0
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_nih_morph():
    """Test read NIH."""
    # 388x252 uint8
    fname = private_file('nihimage/morph.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_nih
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.imagewidth == 388
        assert page.imagelength == 252
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'nih'
        assert series.shape == (252, 388)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        # assert NIH tags
        tags = tif.nih_metadata
        assert tags['FileID'] == 'IPICIMAG'
        assert tags['PixelsPerLine'] == 388
        assert tags['nLines'] == 252
        assert tags['ForegroundIndex'] == 255
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (252, 388)
        assert data.dtype == numpy.uint8
        assert data[195, 144] == 41
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_nih_silver_lake():
    """Test read NIH palette."""
    # 259x187 16 bit palette
    fname = private_file('nihimage/silver lake.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_nih
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.imagewidth == 259
        assert page.imagelength == 187
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'nih'
        assert series.shape == (187, 259)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        # assert NIH tags
        tags = tif.nih_metadata
        assert tags['FileID'] == 'IPICIMAG'
        assert tags['PixelsPerLine'] == 259
        assert tags['nLines'] == 187
        assert tags['ForegroundIndex'] == 109
        # assert data
        data = page.asrgb()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (187, 259, 3)
        assert data.dtype == numpy.uint16
        assert tuple(data[86, 102, :]) == (26214, 39321, 39321)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_nih_scala_media():
    """Test read multi-page NIH."""
    # 36x54x84 palette
    fname = private_file('nihimage/scala-media.tif')
    with TiffFile(fname) as tif:
        assert tif.is_nih
        assert tif.byteorder == '>'
        assert len(tif.pages) == 36
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.imagewidth == 84
        assert page.imagelength == 54
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'nih'
        assert series.shape == (36, 54, 84)
        assert series.dtype == numpy.uint8
        assert series.axes == 'IYX'
        # assert NIH tags
        tags = tif.nih_metadata
        assert tags['Version'] == 160
        assert tags['nLines'] == 54
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (36, 54, 84)
        assert data.dtype == numpy.uint8
        assert data[35, 35, 65] == 171
        assert_aszarr_method(tif, data)
        assert_aszarr_method(tif, data, chunkmode='page')
        assert__str__(tif)
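

# The ImageJ tests that follow exercise how tifffile reconstructs
# hyperstack series from the 'images', 'channels', 'slices', and 'frames'
# entries in imagej_metadata rather than from the TIFF pages alone.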


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON)
def test_read_imagej_rrggbb():
    """Test read planar RGB ImageJ file created by Bio-Formats."""
    fname = public_file('tifffile/rrggbb.ij.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.LZW
        assert page.imagewidth == 31
        assert page.imagelength == 32
        assert page.bitspersample == 16
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.dtype == numpy.uint16
        assert series.shape == (3, 32, 31)
        assert series.axes == 'CYX'
        assert series.get_shape(False) == (1, 1, 3, 32, 31, 1)
        assert series.get_axes(False) == 'TZCYXS'
        assert len(series._pages) == 1
        assert len(series.pages) == 1
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == ''
        assert ijmeta['images'] == 3
        assert ijmeta['channels'] == 3
        assert ijmeta['slices'] == 1
        assert ijmeta['frames'] == 1
        assert ijmeta['hyperstack']
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 32, 31)
        assert data.dtype == numpy.uint16
        assert tuple(data[:, 15, 15]) == (812, 1755, 648)
        assert_decode_method(page)
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert data.shape == (1, 1, 3, 32, 31, 1)
        assert_aszarr_method(series, data, squeeze=False)
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_focal1():
    """Test read ImageJ 205x434x425 uint8."""
    fname = private_file('imagej/focal1.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 205
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric != PHOTOMETRIC.RGB
        assert page.imagewidth == 425
        assert page.imagelength == 434
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.dataoffset == 768
        assert series.shape == (205, 434, 425)
        assert series.dtype == numpy.uint8
        assert series.axes == 'IYX'
        assert series.get_shape(False) == (205, 1, 1, 1, 434, 425, 1)
        assert series.get_axes(False) == 'ITZCYXS'
        assert len(series._pages) == 1
        assert len(series.pages) == 205
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.34k'
        assert ijmeta['images'] == 205
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (205, 434, 425)
        assert data.dtype == numpy.uint8
        assert data[102, 216, 212] == 120
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_hela_cells():
    """Test read ImageJ 512x672 RGB uint16."""
    fname = private_file('imagej/hela-cells.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 672
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.shape == (512, 672, 3)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YXS'
        assert series.get_shape(False) == (1, 1, 1, 512, 672, 3)
        assert series.get_axes(False) == 'TZCYXS'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.46i'
        assert ijmeta['channels'] == 3
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 672, 3)
        assert data.dtype == numpy.uint16
        assert tuple(data[255, 336]) == (440, 378, 298)
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert data.shape == (1, 1, 1, 512, 672, 3)
        assert_aszarr_method(series, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_flybrain():
    """Test read ImageJ 57x256x256 RGB."""
    fname = private_file('imagej/flybrain.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 57
        assert len(tif.series) == 1  # hyperstack
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.shape == (57, 256, 256, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'ZYXS'
        assert series.get_shape(False) == (1, 57, 1, 256, 256, 3)
        assert series.get_axes(False) == 'TZCYXS'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.43d'
        assert ijmeta['slices'] == 57
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (57, 256, 256, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[18, 108, 97]) == (165, 157, 0)
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert data.shape == (1, 57, 1, 256, 256, 3)
        assert_aszarr_method(series, data, squeeze=False)
        assert__str__(tif)
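

# For contiguous ImageJ stacks, only the first page needs to be parsed; the
# confocal-series test below asserts that the remaining pages are kept as
# lightweight TiffFrame instances or, when caching is off, as plain file
# offsets in TiffPages._pages.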


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_confocal_series():
    """Test read ImageJ 25x2x400x400 ZCYX."""
    fname = private_file('imagej/confocal-series.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 50
        assert len(tif.series) == 1  # hyperstack
        # assert page properties
        page = tif.pages.first
        assert page.imagewidth == 400
        assert page.imagelength == 400
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.shape == (25, 2, 400, 400)
        assert series.dtype == numpy.uint8
        assert series.axes == 'ZCYX'
        assert len(series._pages) == 1
        assert len(series.pages) == 50
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.43d'
        assert ijmeta['images'] == len(tif.pages)
        assert ijmeta['channels'] == 2
        assert ijmeta['slices'] == 25
        assert ijmeta['hyperstack']
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (25, 2, 400, 400)
        assert data.dtype == numpy.uint8
        assert tuple(data[12, :, 100, 300]) == (6, 66)
        # assert only two pages are loaded
        with pytest.warns(DeprecationWarning):
            assert isinstance(tif.pages.pages[0], TiffPage)
        assert isinstance(tif.pages._pages[0], TiffPage)
        if tif.pages.cache:
            assert isinstance(tif.pages._pages[1], TiffFrame)
        else:
            assert tif.pages._pages[1] == 8000911
        assert tif.pages._pages[2] == 8001073
        assert tif.pages._pages[-1] == 8008687
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert data.shape == (1, 25, 2, 400, 400, 1)
        assert_aszarr_method(series, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_graphite():
    """Test read ImageJ 1024x593 float32."""
    fname = private_file('imagej/graphite1-1024.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.imagewidth == 1024
        assert page.imagelength == 593
        assert page.bitspersample == 32
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert len(series._pages) == 1
        assert len(series.pages) == 1
        assert series.shape == (593, 1024)
        assert series.dtype == numpy.float32
        assert series.axes == 'YX'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.47t'
        assert round(abs(ijmeta['max'] - 1686.10949707), 7) == 0
        assert round(abs(ijmeta['min'] - 852.08605957), 7) == 0
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (593, 1024)
        assert data.dtype == numpy.float32
        assert round(abs(data[443, 656] - 2203.040771484375), 7) == 0
        assert_aszarr_method(series, data)
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert data.shape == (1, 1, 1, 593, 1024, 1)
        assert_aszarr_method(series, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_bat_cochlea_volume():
    """Test read ImageJ 114 images, no frames, slices, channels specified."""
    fname = private_file('imagej/bat-cochlea-volume.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 114
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric != PHOTOMETRIC.RGB
        assert page.imagewidth == 121
        assert page.imagelength == 154
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert len(series._pages) == 1
        assert len(series.pages) == 114
        assert series.shape == (114, 154, 121)
        assert series.dtype == numpy.uint8
        assert series.axes == 'IYX'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.20n'
        assert ijmeta['images'] == 114
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (114, 154, 121)
        assert data.dtype == numpy.uint8
        assert data[113, 97, 61] == 255
        assert_aszarr_method(series, data)
        # don't squeeze
        data = tif.asarray(squeeze=False)
        assert data.shape == (114, 1, 1, 1, 154, 121, 1)
        assert_aszarr_method(series, data, squeeze=False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_first_instar_brain():
    """Test read ImageJ 56x256x256x3 ZYXS."""
    fname = private_file('imagej/first-instar-brain.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 56
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert len(series._pages) == 1
        assert len(series.pages) == 56
        assert series.shape == (56, 256, 256, 3)
        assert series.dtype == numpy.uint8
        assert series.axes == 'ZYXS'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.44j'
        assert ijmeta['images'] == 56
        assert ijmeta['slices'] == 56
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (56, 256, 256, 3)
        assert data.dtype == numpy.uint8
        assert tuple(data[55, 151, 112]) == (209, 8, 58)
        assert_aszarr_method(series, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_fluorescentcells():
    """Test read ImageJ three channels."""
    fname = private_file('imagej/FluorescentCells.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 3
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.PALETTE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.shape == (3, 512, 512)
        assert series.dtype == numpy.uint8
        assert series.axes == 'CYX'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.40c'
        assert ijmeta['images'] == 3
        assert ijmeta['channels'] == 3
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 512, 512)
        assert data.dtype == numpy.uint8
        assert tuple(data[:, 256, 256]) == (57, 120, 13)
        assert_aszarr_method(series, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_imagej_100000_pages():
    """Test read ImageJ with 100000 pages."""
    # 100000x64x64
    # file is big endian, memory mapped
    fname = public_file('tifffile/100000_pages.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 100000
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert len(series._pages) == 1
        assert len(series.pages) == 100000
        assert series.shape == (100000, 64, 64)
        assert series.dtype == numpy.uint16
        assert series.axes == 'TYX'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.48g'
        assert round(abs(ijmeta['max'] - 119.0), 7) == 0
        assert round(abs(ijmeta['min'] - 86.0), 7) == 0
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.shape == (100000, 64, 64)
        assert data.dtype == numpy.dtype('>u2')
        assert round(abs(data[7310, 25, 25] - 100), 7) == 0
        # too slow: assert_aszarr_method(series, data)
        assert__str__(tif, 0)
        del data
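

# asarray(out='memmap') returns a numpy.memmap backed by the file itself,
# so even a 100000-page contiguous stack is read without copying the pixel
# data into process memory; note that the big-endian '>u2' dtype of the
# file is preserved in the memmap.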


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_invalid_metadata(caplog):
    """Test read bad ImageJ metadata."""
    # file contains 1 page but metadata claims 3500 images
    # memory map big endian data
    fname = private_file('sima/0.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert 'ImageJ series metadata invalid or corrupted' in caplog.text
        # assert page properties
        page = tif.pages.first
        assert page.photometric != PHOTOMETRIC.RGB
        assert page.imagewidth == 173
        assert page.imagelength == 173
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'generic'  # imagej series failed
        assert series.dataoffset == 8
        assert series.shape == (173, 173)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YX'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['ImageJ'] == '1.49i'
        assert ijmeta['images'] == 3500
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.shape == (173, 173)
        assert data.dtype == numpy.dtype('>u2')
        assert data[94, 34] == 1257
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        assert__str__(tif)
        del data


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_imagej_invalid_hyperstack():
    """Test read bad ImageJ hyperstack."""
    # file claims to be a hyperstack but is not stored as such
    # produced by OME writer
    # reported by Taras Golota on 10/27/2016
    fname = private_file('imagej/X0.ome.CTZ.perm.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '<'
        assert len(tif.pages) == 48  # not a hyperstack
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric != PHOTOMETRIC.RGB
        assert page.imagewidth == 1392
        assert page.imagelength == 1040
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.dataoffset is None  # not contiguous
        assert series.shape == (2, 4, 6, 1040, 1392)
        assert series.dtype == numpy.uint16
        assert series.axes == 'TZCYX'
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['hyperstack']
        assert ijmeta['images'] == 48
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_scifio():
    """Test read SCIFIO file using ImageJ metadata."""
    # https://github.com/AllenCellModeling/aicsimageio/issues/436
    # read
    fname = private_file('scifio/2MF1P2_glia.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 343  # not a hyperstack
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 1024
        assert page.imagelength == 1024
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.kind == 'imagej'
        assert series.dataoffset is None  # not contiguous
        assert series.shape == (343, 1024, 1024)
        assert series.dtype == numpy.uint8
        assert series.axes == 'IYX'
        assert isinstance(series.pages[2], TiffFrame)
        # assert ImageJ tags
        ijmeta = tif.imagej_metadata
        assert ijmeta is not None
        assert ijmeta['SCIFIO'] == '0.42.0'
        assert ijmeta['hyperstack']
        assert ijmeta['images'] == 343
        # assert data
        # data = series.asarray()
        # assert data[192, 740, 420] == 2
        # assert_aszarr_method(series, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_fluoview_lsp1_v_laser():
    """Test read FluoView CTYX."""
    # raises 'UnicodeWarning: Unicode equal comparison failed' on Python 2
    fname = private_file('fluoview/lsp1-V-laser0.3-1.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        assert tif.is_fluoview
        # assert page properties
        page = tif.pages.first
        assert page.is_fluoview
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        m = fluoview_description_metadata(page.description)
        assert m['Version Info']['FLUOVIEW Version'] == (
            'FV10-ASW ,ValidBitColunt=12'
        )
        assert tuple(m['LUT Ch1'][255]) == (255, 255, 255)
        mm = tif.fluoview_metadata
        assert mm['ImageName'] == 'lsp1-V-laser0.3-1.oib'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 50, 256, 256)
        assert series.dtype == numpy.uint16
        assert series.axes == 'CTYX'
        # assert data
        data = tif.asarray()
        assert data.shape == (2, 50, 256, 256)
        assert data.dtype == numpy.uint16
        assert round(abs(data[1, 36, 128, 128] - 824), 7) == 0
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_fluoview_120816_bf_f0000():
    """Test read FluoView TZYX."""
    fname = private_file('fluoview/120816_bf_f0000.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 864
        assert len(tif.series) == 1
        assert tif.is_fluoview
        # assert page properties
        page = tif.pages.first
        assert page.is_fluoview
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 1024
        assert page.imagelength == 1024
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        m = fluoview_description_metadata(page.description)
        assert m['Environment']['User'] == 'admin'
        assert m['Region Info (Fields) Field']['Width'] == 1331.2
        m = tif.fluoview_metadata
        assert m['ImageName'] == '120816_bf'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (144, 6, 1024, 1024)
        assert series.dtype == numpy.uint16
        assert series.axes == 'TZYX'
        # assert data
        data = tif.asarray()
        assert data.shape == (144, 6, 1024, 1024)
        assert data.dtype == numpy.uint16
        assert round(abs(data[1, 2, 128, 128] - 8317), 7) == 0
        # too slow: assert_aszarr_method(series, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_metaseries():
    """Test read MetaSeries 1040x1392 uint16, LZW."""
    # Strips do not contain an EOI code as required by the TIFF spec.
    fname = private_file('metaseries/metaseries.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.imagewidth == 1392
        assert page.imagelength == 1040
        assert page.bitspersample == 16
        # assert metadata
        assert page.description.startswith('<MetaData>')
        # assert series properties
        series = tif.series[0]
        assert series.shape == (1040, 1392)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray()
        assert data.shape == (1040, 1392)
        assert data.dtype == numpy.uint16
        assert data[256, 256] == 1917
        assert_aszarr_method(series, data)
        assert_aszarr_method(series, data, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_metaseries_g4d7r():
    """Test read Metamorph/Metaseries."""
    # 12113x13453, uint16
    import uuid

    fname = private_file('metaseries/g4d7r.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert tif.is_metaseries
        # assert page properties
        page = tif.pages.first
        assert page.is_metaseries
        assert page.is_contiguous
        assert page.compression == COMPRESSION.NONE
        assert page.imagewidth == 13453
        assert page.imagelength == 12113
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        m = metaseries_description_metadata(page.description)
        assert m['ApplicationVersion'] == '7.8.6.0'
        assert m['PlaneInfo']['pixel-size-x'] == 13453
        assert m['SetInfo']['number-of-planes'] == 1
        assert m['PlaneInfo']['modification-time-local'] == datetime.datetime(
            2014, 10, 28, 16, 17, 16, 620000
        )
        assert m['PlaneInfo']['plane-guid'] == uuid.UUID(
            '213d9ee7-b38f-4598-9601-6474bf9d0c81'
        )
        # assert series properties
        series = tif.series[0]
        assert series.shape == (12113, 13453)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.memmap)
        assert data.shape == (12113, 13453)
        assert data.dtype == numpy.dtype('<u2')


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_geotiff_dimapdocument():
    """Test read GeoTIFF."""
    fname = private_file('geotiff/DimapDocument.tif')
    with TiffFile(fname) as tif:
        assert tif.is_geotiff
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (1830, 1830)
        assert series.dtype == numpy.uint16
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert page properties
        page = tif.pages.first
        assert page.shape == (1830, 1830)
        assert page.imagewidth == 1830
        assert page.imagelength == 1830
        assert page.bitspersample == 16
        assert page.is_contiguous
        assert page.tags['65000'].value.startswith('<Dimap_Document')
        # assert GeoTIFF tags
        tags = tif.geotiff_metadata
        assert tags['GTCitationGeoKey'] == 'WGS 84 / UTM zone 29N'
        assert tags['ProjectedCSTypeGeoKey'] == 32629
        assert_array_almost_equal(
            tags['ModelTransformation'],
            [
                [60.0, 0.0, 0.0, 6.0e5],
                [0.0, -60.0, 0.0, 5900040.0],
                [0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        )
        assert_aszarr_method(page)
        assert__str__(tif)
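

# tif.geotiff_metadata merges the GeoKeyDirectory and related tags such as
# ModelPixelScale and ModelTransformation into one dict. Minimal sketch
# (hypothetical file name), assuming only the public API:
#
#   with TiffFile('projected.tif') as tif:
#       if tif.is_geotiff:
#           print(tif.geotiff_metadata['GTCitationGeoKey'])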


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_geotiff_spaf27_markedcorrect():
    """Test read GeoTIFF."""
    fname = private_file('geotiff/spaf27_markedcorrect.tif')
    with TiffFile(fname) as tif:
        assert tif.is_geotiff
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (20, 20)
        assert series.dtype == numpy.uint8
        assert series.axes == 'YX'
        assert series.kind == 'uniform'
        # assert page properties
        page = tif.pages.first
        assert page.shape == (20, 20)
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert GeoTIFF tags
        tags = tif.geotiff_metadata
        assert tags['GTCitationGeoKey'] == 'NAD27 / California zone VI'
        assert tags['GeogAngularUnitsGeoKey'] == 9102
        assert tags['ProjFalseOriginLatGeoKey'] == 32.1666666666667
        assert_array_almost_equal(
            tags['ModelPixelScale'], [195.509321, 198.32184, 0]
        )
        assert_aszarr_method(page)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_geotiff_cint16():
    """Test read complex integer images."""
    fname = private_file('geotiff/cint16.tif')
    with TiffFile(fname) as tif:
        assert tif.is_geotiff
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.sampleformat == SAMPLEFORMAT.COMPLEXINT
        assert page.bitspersample == 32
        assert page.dtype == numpy.complex64
        assert page.shape == (100, 100)
        assert page.imagewidth == 100
        assert page.imagelength == 100
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert not page.is_contiguous
        data = page.asarray()
        assert data[9, 11] == 0 + 0j
        assert_aszarr_method(page, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
@pytest.mark.parametrize('bits', [16, 32])
def test_read_complexint(bits):
    """Test read complex integer images."""
    fname = private_file(f'gdal/cint{bits}.tif')
    with TiffFile(fname) as tif:
        assert tif.is_geotiff
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.sampleformat == SAMPLEFORMAT.COMPLEXINT
        assert page.bitspersample == bits * 2
        assert page.dtype == f'complex{bits * 4}'
        assert page.shape == (20, 20)
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert not page.is_contiguous
        data = page.asarray()
        assert data[9, 11] == 107 + 0j
        # assert GeoTIFF tags
        tags = tif.geotiff_metadata
        assert tags['GTCitationGeoKey'] == 'NAD27 / UTM zone 11N'
        assert_aszarr_method(page, data)
        assert__str__(tif)
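

# SAMPLEFORMAT.COMPLEXINT stores interleaved signed-integer real/imaginary
# pairs; tifffile promotes them to a complex dtype on read, so BitsPerSample
# of 2*N maps to a complex dtype of 4*N bits (e.g. cint16 -> complex64), as
# the bits * 2 / complex{bits * 4} assertions above show.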


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_qpi():
    """Test read PerkinElmer-QPI pyramid."""
    fname = private_file('PerkinElmer-QPI/18-2470_2471_Scan1.qptiff')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 4
        assert len(tif.pages) == 9
        assert tif.is_qpi
        page = tif.pages.first
        assert page.compression == COMPRESSION.JPEG
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 34560
        assert page.imagelength == 57600
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['Software'].value == 'PerkinElmer-QPI'
        page = tif.pages[1]
        assert page.compression == COMPRESSION.LZW
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 270
        assert page.imagelength == 450
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        series = tif.series[0]
        assert series.kind == 'qpi'
        assert series.name == 'Baseline'
        assert series.shape == (57600, 34560, 3)
        assert series.dtype == numpy.uint8
        assert series.is_pyramidal
        assert len(series.levels) == 6
        series = tif.series[1]
        assert series.kind == 'qpi'
        assert series.name == 'Thumbnail'
        assert series.shape == (450, 270, 3)
        assert series.dtype == numpy.uint8
        assert not series.is_pyramidal
        series = tif.series[2]
        assert series.kind == 'qpi'
        assert series.name == 'Macro'
        assert series.shape == (4065, 2105, 3)
        assert series.dtype == numpy.uint8
        assert not series.is_pyramidal
        series = tif.series[3]
        assert series.kind == 'qpi'
        assert series.name == 'Label'
        assert series.shape == (453, 526, 3)
        assert series.dtype == numpy.uint8
        assert not series.is_pyramidal
        # assert data
        image = tif.asarray(series=1)
        image = tif.asarray(series=2)
        image = tif.asarray(series=3)
        image = tif.asarray(series=0, level=4)
        assert image.shape == (3600, 2160, 3)
        assert image.dtype == numpy.uint8
        assert tuple(image[1200, 1500]) == (244, 233, 229)
        assert_aszarr_method(tif, image, series=0, level=4)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_qpi_nopyramid():
    """Test read PerkinElmer-QPI, non Pyramid."""
    fname = private_file(
        'PerkinElmer-QPI/LuCa-7color_[13860,52919]_1x1component_data.tiff'
    )
    with TiffFile(fname) as tif:
        assert len(tif.series) == 2
        assert len(tif.pages) == 9
        assert tif.is_qpi
        page = tif.pages.first
        assert page.compression == COMPRESSION.LZW
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.planarconfig == PLANARCONFIG.CONTIG
        assert page.imagewidth == 1868
        assert page.imagelength == 1400
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        assert page.tags['Software'].value == 'PerkinElmer-QPI'
        series = tif.series[0]
        assert series.kind == 'qpi'
        assert series.shape == (8, 1400, 1868)
        assert series.dtype == numpy.float32
        assert not series.is_pyramidal
        series = tif.series[1]
        assert series.kind == 'qpi'
        assert series.shape == (350, 467, 3)
        assert series.dtype == numpy.uint8
        assert not series.is_pyramidal
        # assert data
        image = tif.asarray()
        assert image.shape == (8, 1400, 1868)
        assert image.dtype == numpy.float32
        assert image[7, 1200, 1500] == 2.2132580280303955
        image = tif.asarray(series=1)
        assert image.shape == (350, 467, 3)
        assert image.dtype == numpy.uint8
        assert image[300, 400, 1] == 48
        assert_aszarr_method(tif, image, series=1)
        assert_aszarr_method(tif, image, series=1, chunkmode='page')
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_indica():
    """Test read Indica Labs pyramid."""
    # https://forum.image.sc/t/89191/4
    fname = private_file('Indica/mouse_link.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 40
        assert tif.is_indica
        assert tif.indica_metadata.endswith('</indica>')
        page = tif.pages.first
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.shape == (26836, 18282)
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        assert page.dtype == numpy.float32
        assert page.tags['Software'].value == 'IndicaLabsImageWriter v1.2.1'
        series = tif.series[0]
        assert series.kind == 'generic'  # 'indica'
        assert series.axes == 'IYX'  # 'CYX'
        assert series.shape == (8, 26836, 18282)
        assert series.dtype == numpy.float32
        assert len(series.levels) == 5
        assert series.is_pyramidal
        # assert data
        image = tif.asarray(series=0, level=3)
        assert image.shape == (8, 3355, 2286)
        assert image.dtype == numpy.float32
        assert_array_almost_equal(image[7, 3000, 1000], 6.0852714)
        assert_aszarr_method(series, image, level=3)
        assert_aszarr_method(series, image, level=3, chunkmode='page')
        assert__str__(tif)
page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.YCBCR assert page.shape == (57440, 130546, 3) assert page.bitspersample == 8 assert page.samplesperpixel == 3 assert page.dtype == numpy.uint8 series = tif.series[0] assert series.kind == 'avs' assert series.name == 'Baseline' assert series.axes == 'ZYXS' assert series.shape == (5, 57440, 130546, 3) assert series.dtype == numpy.uint8 assert series.is_pyramidal assert len(series.levels) == 8 series = tif.series[1] assert series.kind == 'avs' assert series.name == 'Map' assert series.axes == 'YXS' assert series.shape == (1399, 3180, 3) series = tif.series[2] assert series.kind == 'avs' assert series.name == 'Macro' assert series.axes == 'YXS' assert series.shape == (508, 1489, 3) # assert data series = tif.series[0] image = tif.asarray(series=0, level=4) assert image.shape == (5, 3590, 8159, 3) assert image.dtype == numpy.uint8 assert image[2, 900, 3000, 0] == 218 assert_aszarr_method(series, image, level=4) assert_aszarr_method(series, image, level=4, chunkmode='page') assert__str__(tif) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_read_philips(): """Test read Philips DP pyramid.""" # https://camelyon17.grand-challenge.org/Data/ fname = private_file('PhilipsDP/test_001.tif') with TiffFile(fname) as tif: assert len(tif.series) == 1 assert len(tif.pages) == 9 assert tif.is_philips assert tif.philips_metadata.endswith('') page = tif.pages.first assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.YCBCR assert page.planarconfig == PLANARCONFIG.CONTIG assert page.tags['ImageWidth'].value == 86016 assert page.tags['ImageLength'].value == 89600 assert page.imagewidth == 86016 assert page.imagelength == 89600 assert page.bitspersample == 8 assert page.samplesperpixel == 3 assert page.tags['Software'].value == 'Philips DP v1.0' series = tif.series[0] assert series.kind == 'philips' assert series.shape == (89600, 86016, 3) assert len(series.levels) == 9 assert series.is_pyramidal assert series.levels[1].shape == (44800, 43008, 3) assert series.levels[2].shape == (22400, 21504, 3) assert series.levels[3].shape == (11200, 10752, 3) assert series.levels[4].shape == (5600, 5376, 3) assert series.levels[5].shape == (2800, 2688, 3) assert series.levels[6].shape == (1400, 1344, 3) assert series.levels[7].shape == (700, 672, 3) assert series.levels[8].shape == (350, 336, 3) page = series.levels[1].keyframe assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.YCBCR assert page.planarconfig == PLANARCONFIG.CONTIG assert page.tags['ImageWidth'].value == 43008 assert page.tags['ImageLength'].value == 45056 assert page.imagewidth == 43008 assert page.imagelength == 44800 for level in range(1, 9): tif.asarray(series=0, level=level) # assert data image = tif.asarray(series=0, level=5) assert image.shape == (2800, 2688, 3) assert image[300, 400, 1] == 206 assert_aszarr_method(series, image, level=5) assert_aszarr_method(series, image, level=5, chunkmode='page') assert__str__(tif) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_read_philips_issue249(): """Test write_fsspec with Philips slide missing row of tiles.""" # https://github.com/cgohlke/tifffile/issues/249 fname = private_file('PhilipsDP/patient_080_node_2.tif') with TiffFile(fname) as tif: assert len(tif.series) == 2 assert len(tif.pages) == 11 assert tif.is_philips assert 
tif.philips_metadata.endswith('') page = tif.pages.first assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.YCBCR assert page.planarconfig == PLANARCONFIG.CONTIG assert page.tags['ImageWidth'].value == 155136 assert page.tags['ImageLength'].value == 78336 assert page.imagewidth == 155136 assert page.imagelength == 78336 assert page.bitspersample == 8 assert page.samplesperpixel == 3 assert page.tags['Software'].value == 'Philips DP v1.0' series = tif.series[1] assert series.kind == 'philips' assert series.name == 'Macro' assert series.shape == (801, 1756, 3) assert len(series.levels) == 1 assert not series.is_pyramidal series = tif.series[0] assert series.kind == 'philips' assert series.name == 'Baseline' assert series.shape == (78336, 155136, 3) assert len(series.levels) == 10 assert series.is_pyramidal page = series.levels[3].keyframe assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.YCBCR assert page.planarconfig == PLANARCONFIG.CONTIG assert page.tags['ImageWidth'].value == 19456 assert page.tags['ImageLength'].value == 9728 assert page.imagewidth == 19392 assert page.imagelength == 9792 for level in range(1, 10): tif.asarray(series=0, level=level) # assert data image = tif.asarray(series=0, level=3) assert image.shape == (9792, 19392, 3) assert image[300, 400, 1] == 226 assert image[9791, 0, 1] == 0 assert_aszarr_method(series, image, level=3) assert__str__(tif) if SKIP_ZARR: return # write fsspec from imagecodecs.numcodecs import register_codecs register_codecs('imagecodecs_jpeg', verbose=False) url = os.path.dirname(fname).replace('\\', '/') with TempFileName('issue_philips_fsspec', ext='.json') as jsonfile: page.aszarr().write_fsspec(jsonfile, url, version=0) mapper = fsspec.get_mapper( 'reference://', fo=jsonfile, target_protocol='file', remote_protocol='file', ) zobj = zarr.open(mapper, mode='r') assert_array_equal(zobj[:], image) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_read_philips_issue253(): """Test read Philips DP pyramid with seemingly extra column of tiles.""" # https://github.com/cgohlke/tifffile/issues/253 # https://registry.opendata.aws/camelyon/ fname = private_file('PhilipsDP/sample.tiff') with TiffFile(fname) as tif: assert len(tif.series) == 3 assert len(tif.pages) == 12 assert tif.is_philips assert tif.philips_metadata.endswith('') page = tif.pages.first assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.RGB assert page.planarconfig == PLANARCONFIG.CONTIG assert page.tags['ImageWidth'].value == 188416 assert page.tags['ImageLength'].value == 93696 assert page.imagewidth == 188416 assert page.imagelength == 93696 assert page.bitspersample == 8 assert page.samplesperpixel == 3 assert page.tags['Software'].value == 'Philips DP v1.0' series = tif.series[1] assert series.kind == 'philips' assert series.name == 'Macro' assert series.shape == (812, 1806, 3) assert not series.is_pyramidal series = tif.series[2] assert series.kind == 'philips' assert series.name == 'Label' assert series.shape == (812, 671, 3) assert not series.is_pyramidal series = tif.series[0] assert series.kind == 'philips' assert series.name == 'Baseline' assert series.shape == (93696, 188416, 3) assert len(series.levels) == 10 assert series.is_pyramidal assert series.levels[1].shape == (46848, 94208, 3) assert series.levels[2].shape == (23424, 47104, 3) assert series.levels[3].shape == (11712, 23552, 3) assert series.levels[4].shape == 
(5856, 11776, 3) assert series.levels[5].shape == (2928, 5888, 3) assert series.levels[6].shape == (1464, 2944, 3) assert series.levels[7].shape == (732, 1472, 3) assert series.levels[8].shape == (366, 736, 3) assert series.levels[9].shape == (183, 368, 3) page = series.levels[1].keyframe assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.RGB assert page.planarconfig == PLANARCONFIG.CONTIG assert page.tags['ImageWidth'].value == 94208 assert page.tags['ImageLength'].value == 47104 assert page.imagewidth == 94208 assert page.imagelength == 46848 for level in range(1, 10): tif.asarray(series=0, level=level) # assert data image = tif.asarray(series=0, level=5) assert image.shape == (2928, 5888, 3) assert image[300, 400, 1] == 254 assert_aszarr_method(series, image, level=5) assert_aszarr_method(series, image, level=5, chunkmode='page') assert__str__(tif) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_read_zif(): """Test read Zoomable Image Format ZIF.""" fname = private_file('zif/ZoomifyImageExample.zif') with TiffFile(fname) as tif: # assert tif.is_zif assert len(tif.pages) == 5 assert len(tif.series) == 1 for page in tif.pages: assert page.description == ( 'Created by Objective ' 'Pathology Services' ) # first page page = tif.pages.first assert page.photometric == PHOTOMETRIC.YCBCR assert page.compression == COMPRESSION.JPEG assert page.shape == (3120, 2080, 3) assert tuple(page.asarray()[3110, 2070, :]) == (27, 45, 59) # page 4 page = tif.pages[-1] assert page.photometric == PHOTOMETRIC.YCBCR assert page.compression == COMPRESSION.JPEG assert page.shape == (195, 130, 3) assert tuple(page.asarray()[191, 127, :]) == (30, 49, 66) # series series = tif.series[0] assert series.kind == 'generic' assert series.is_pyramidal assert len(series.levels) == 5 assert series.shape == (3120, 2080, 3) assert tuple(series.asarray()[3110, 2070, :]) == (27, 45, 59) assert series.levels[-1].shape == (195, 130, 3) assert tuple(series.asarray(level=-1)[191, 127, :]) == (30, 49, 66) assert_aszarr_method(series, level=-1) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_read_vsi(): """Test read Olympus VSI.""" fname = private_file('VSI/brightfield.vsi') with TiffFile(fname) as tif: assert not tif.is_sis assert tif.byteorder == '<' assert len(tif.pages) == 5 assert len(tif.series) == 5 # assert page properties page = tif.pages.first assert not page.is_contiguous assert page.imagewidth == 991 assert page.imagelength == 375 assert page.bitspersample == 8 assert page.samplesperpixel == 3 assert page.tags['Artist'].value == 'ics' # assert data data = tif.asarray() assert data.shape == (375, 991, 3) assert data[200, 256, 1] == 3 # assert metadata sis = tif.sis_metadata assert sis['magnification'] == 1.0 assert_aszarr_method(tif, data) assert_aszarr_method(tif, data, chunkmode='page') assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_sis(): """Test read Olympus SIS.""" fname = private_file('sis/4A5IE8EM_F00000409.tif') with TiffFile(fname) as tif: assert tif.is_sis assert tif.byteorder == '<' assert len(tif.pages) == 122 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.is_contiguous assert page.imagewidth == 353 assert page.imagelength == 310 assert page.bitspersample == 16 assert page.samplesperpixel == 1 assert page.tags['Software'].value == 'analySIS 5.0' # assert data data = tif.asarray() assert data.shape == 
(61, 2, 310, 353) assert data[30, 1, 256, 256] == 210 # assert metadata sis = tif.sis_metadata assert sis is not None assert sis['axes'] == 'TC' assert sis['shape'] == (61, 2) assert sis['Band'][1]['BandName'] == 'Fura380' assert sis['Band'][0]['LUT'].shape == (256, 3) assert sis['Time']['TimePos'].shape == (61,) assert sis['name'] == 'Hela-Zellen' assert sis['magnification'] == 60.0 assert_aszarr_method(tif, data) assert_aszarr_method(tif, data, chunkmode='page') assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_sis_noini(): """Test read Olympus SIS without INI tag.""" fname = private_file('sis/110.tif') with TiffFile(fname) as tif: assert tif.is_sis assert tif.byteorder == '<' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.imagewidth == 2560 assert page.imagelength == 1920 assert page.bitspersample == 8 assert page.samplesperpixel == 3 # assert metadata sis = tif.sis_metadata assert 'axes' not in sis assert sis['magnification'] == 20.0 assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_sem_metadata(): """Test read Zeiss SEM metadata.""" # file from hyperspy tests fname = private_file('hyperspy/test_tiff_Zeiss_SEM_1k.tif') with TiffFile(fname) as tif: assert tif.is_sem assert tif.byteorder == '<' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.PALETTE assert page.imagewidth == 1024 assert page.imagelength == 768 assert page.bitspersample == 8 assert page.samplesperpixel == 1 # assert data and metadata data = page.asrgb() assert tuple(data[563, 320]) == (38550, 38550, 38550) sem = tif.sem_metadata assert sem[''][3] == 2.614514e-06 assert sem['ap_date'] == ('Date', '23 Dec 2015') assert sem['ap_time'] == ('Time', '9:40:32') assert sem['dp_image_store'] == ('Store resolution', '1024 * 768') assert sem['ap_fib_fg_emission_actual'] == ( 'Flood Gun Emission Actual', 0.0, 'µA', ) assert_aszarr_method(tif) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_sem_bad_metadata(): """Test read Zeiss SEM metadata with wrong length.""" # reported by Klaus Schwarzburg on 8/27/2018 fname = private_file('issues/sem_bad_metadata.tif') with TiffFile(fname) as tif: assert tif.is_sem assert tif.byteorder == '<' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.PALETTE assert page.imagewidth == 1024 assert page.imagelength == 768 assert page.bitspersample == 8 assert page.samplesperpixel == 1 # assert data and metadata data = page.asrgb() assert tuple(data[350, 150]) == (17476, 17476, 17476) sem = tif.sem_metadata assert sem['sv_version'][1] == 'V05.07.00.00 : 08-Jul-14' assert_aszarr_method(tif) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_fei_metadata(): """Test read Helios FEI metadata.""" # file from hyperspy tests fname = private_file('hyperspy/test_tiff_FEI_SEM.tif') with TiffFile(fname) as tif: assert tif.is_fei assert tif.byteorder == '<' assert len(tif.pages) == 1 assert len(tif.series) == 1 # assert page properties page = tif.pages.first assert page.is_contiguous assert page.photometric != PHOTOMETRIC.PALETTE assert page.imagewidth == 1536 assert page.imagelength == 1103 assert page.bitspersample == 8 assert page.samplesperpixel == 1 # assert data and metadata data = 
page.asarray() assert data[563, 320] == 220 fei = tif.fei_metadata assert fei['User']['User'] == 'supervisor' assert fei['System']['DisplayHeight'] == 0.324 assert_aszarr_method(tif) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE or SKIP_ZARR, reason=REASON) def test_read_mmstack_multifile(caplog): """Test read MicroManager 2.0 multi-file, multi-position dataset.""" # TODO: what version of MicroManager does not write corrupted files? # second ImageDescription tag value is beyond 4 GB # MicroManager headers are beyond 4 GB # MicroManager display settings are truncated fname = private_file('MMStack/NDTiff.index/_4_MMStack_Pos0.ome.tif') with TiffFile(fname) as tif: assert 'coercing invalid ASCII to bytes' in caplog.text assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert not tif.is_imagej assert not tif.is_ndtiff assert tif.byteorder == '<' assert len(tif.pages) == 8092 assert len(tif.series) == 1 assert 'failed to read display settings' not in caplog.text assert 'failed to read comments: invalid header' not in caplog.text # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'] == '2.0.0' assert meta['Summary']['Prefix'] == '_4' assert meta['IndexMap'].shape == (8092, 5) assert 'Comments' in meta # assert series properties series = tif.series[0] assert len(series) == 17472 assert series.shape == (91, 2, 48, 2, 512, 512) assert series.axes == 'TRZCYX' assert series.kind == 'mmstack' assert series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (91, 2, 48, 2, 512, 512) assert data.dtype == numpy.uint16 assert data[48, 1, 23, 1, 253, 257] == 1236 # assert_aszarr_method(series, data) # takes 2 minutes with series.aszarr() as store: data = zarr.open(store, mode='r') assert data[48, 1, 23, 1, 253, 257] == 1236 # test OME; ImageJ can't handle multi-file or positions assert_array_equal(data[:, 0], imread(fname, is_mmstack=False)) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_read_mmstack_mosaic(caplog): """Test read MicroManager 1.4 mosaic dataset.""" fname = private_file( 'MMStack/mosaic/d220708_HybISS_AS_cycles1to5_NoBridgeProbes_dim3x3__3' '_MMStack_2-Pos_000_001.ome.tif' ) with TiffFile(fname) as tif: assert 'coercing invalid ASCII to bytes' not in caplog.text assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert tif.is_imagej assert not tif.is_ndtiff assert tif.byteorder == '<' assert len(tif.pages) == 55 assert len(tif.series) == 1 # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'] == '1.4.24 20220315' assert meta['Summary']['Prefix'] == ( 'd220708_HybISS_AS_cycles1to5_NoBridgeProbes_dim3x3__3' ) assert meta['IndexMap'].shape == (55, 5) assert 'Comments' in meta # assert series properties series = tif.series[0] assert len(series) == 495 assert series.shape == (9, 11, 5, 1040, 1388) assert series.axes == 'RZCYX' assert series.kind == 'mmstack' assert series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (9, 11, 5, 1040, 1388) assert data.dtype == numpy.uint16 assert data[7, 9, 3, 753, 1257] == 90 assert_aszarr_method(series, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_read_mmstack_single(): """Test read MicroManager 
single-file multi-region dataset.""" fname = private_file( 'MMStack/181003_multi_pos_time_course_1_MMStack.ome.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert tif.is_imagej assert not tif.is_ndtiff assert tif.byteorder == '<' assert len(tif.pages) == 20 assert len(tif.series) == 1 # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'] == '2.0.0-beta3 20160512' assert meta['Summary']['Prefix'] == '181003_multi_pos_time_course_1' assert meta['IndexMap'].shape == (20, 5) assert meta['Comments']['0_0_4_1'] == '' # assert series properties series = tif.series[0] assert len(series) == 20 assert series.shape == (10, 2, 256, 256) assert series.axes == 'TRYX' assert series.kind == 'mmstack' assert not series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (10, 2, 256, 256) assert data.dtype == numpy.uint16 assert data[7, 1, 111, 222] == 6991 assert_aszarr_method(series, data) # test OME; ImageJ can't handle positions assert_array_equal(data[:, 0], imread(fname, is_mmstack=False)) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_read_mmstack_missing(caplog): """Test read MicroManager missing files and pages in dataset.""" fname = private_file('MMStack/movie_9_MMStack.ome.tif') with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert tif.is_imagej assert not tif.is_ndtiff assert tif.byteorder == '<' assert len(tif.pages) == 126 assert len(tif.series) == 1 assert 'MMStack series is missing files' in caplog.text assert 'MMStack is missing 1 page' in caplog.text # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'] == '1.4.16 20140128' assert meta['Summary']['Prefix'] == 'movie_9' assert meta['IndexMap'].shape == (125, 5) assert meta['Comments'] == {'Summary': ''} assert meta['DisplaySettings'][0]['Name'] == 'Dual-GFP' # assert series properties series = tif.series[0] assert series[-1] is None # missing page assert len(series) == 126 assert series.shape == (63, 2, 264, 320) assert series.axes == 'TCYX' assert series.kind == 'mmstack' assert not series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (63, 2, 264, 320) assert data.dtype == numpy.uint16 assert data[59, 1, 151, 186] == 599 # assert zarr if not SKIP_ZARR and zarr is not None: with series.aszarr(fillvalue=100) as store: assert '1.1.0.0' in store assert '62.1.0.0' not in store # missing page z = zarr.open(store, mode='r') assert z[62, 1, 0, 0] == 100 assert_array_equal(data[:62], z[:62]) # test OME and ImageJ assert_array_equal(data, imread(fname, is_mmstack=False)) assert_array_equal(data, imread(fname, is_mmstack=False, is_ome=False)) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_read_mmstack_bytesio(caplog): """Test read MicroManager missing data in BytesIO.""" fname = private_file('MMStack/movie_9_MMStack.ome.tif') with open(fname, 'rb') as fh: bytesio = BytesIO(fh.read()) with TiffFile(bytesio) as tif: assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert tif.is_imagej assert not tif.is_ndtiff assert not tif.filehandle.is_file assert tif.byteorder == '<' assert len(tif.pages) == 126 assert len(tif.series) == 1 
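        # the BytesIO-backed file is expected to log the same missing-file
        # and missing-page warnings as the on-disk dataset above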
assert 'MMStack series is missing files' in caplog.text assert 'MMStack is missing 1 page' in caplog.text # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'] == '1.4.16 20140128' assert meta['Summary']['Prefix'] == 'movie_9' assert meta['IndexMap'].shape == (125, 5) assert meta['Comments'] == {'Summary': ''} assert meta['DisplaySettings'][0]['Name'] == 'Dual-GFP' # assert series properties series = tif.series[0] assert len(series) == 126 assert series.shape == (63, 2, 264, 320) assert series.axes == 'TCYX' assert series.kind == 'mmstack' assert not series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (63, 2, 264, 320) assert data.dtype == numpy.uint16 assert data[59, 1, 151, 186] == 599 assert_aszarr_method(series, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_read_mmstack_missing_sbs(caplog): """Test read MicroManager dataset with missing data.""" # https://github.com/cgohlke/tifffile/issues/187 fname = private_file('MMStack/10X_c1-SBS-1_A1_Tile-102.sbs.tif') with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert tif.is_imagej assert not tif.is_ndtiff assert tif.byteorder == '<' assert len(tif.pages) == 5 assert len(tif.series) == 1 assert 'MMStack file name is invalid' in caplog.text assert 'MMStack series is missing files' in caplog.text # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'].startswith('1.4.23 2019') assert meta['Summary']['Prefix'] == '10X_c1-SBS-1_1' assert meta['IndexMap'].shape == (5, 5) assert meta['Comments']['Summary'] == '' assert meta['DisplaySettings'][0]['Name'] == 'DAPI_10p' # assert series properties series = tif.series[0] assert len(series) == 5 assert series.shape == (5, 1024, 1024) assert series.axes == 'CYX' assert series.kind == 'mmstack' assert not series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert data.shape == (5, 1024, 1024) assert data.dtype == numpy.uint16 assert data[3, 151, 186] == 542 assert_aszarr_method(series, data) # test OME assert_array_equal(data, imread(fname, is_mmstack=False)) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_read_mmstack_trzc(): """Test read MicroManager 6 dimensional dataset.""" fname = private_file( 'MMStack' '/image_stack_tpzc_50tp_2p_5z_3c_512k_1_MMStack_2-Pos000_000.ome.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_mmstack assert tif.is_ome assert tif.is_imagej assert not tif.is_ndtiff assert tif.byteorder == '<' assert len(tif.pages) == 750 assert len(tif.series) == 1 # assert metadata meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 0 assert meta['Summary']['MicroManagerVersion'].startswith('2.0.0-gamma') assert meta['Summary']['Prefix'] == ( 'image_stack_tpzc_50tp_2p_5z_3c_512k_1' ) assert meta['IndexMap'].shape == (750, 5) assert meta['Comments']['Summary'] == '' assert 'DisplaySettings' not in meta # assert series properties series = tif.series[0] assert len(series) == 1500 assert series.shape == (50, 2, 5, 3, 256, 256) assert series.axes == 'TRZCYX' assert series.kind == 'mmstack' assert series.is_multifile # assert data data = tif.asarray() assert isinstance(data, numpy.ndarray) assert 
data.shape == (50, 2, 5, 3, 256, 256) assert data.dtype == numpy.uint16 assert data[27, 1, 3, 2, 151, 186] == 16 assert_aszarr_method(series, data) # test OME assert_array_equal( data[:, 1], imread(fname, is_mmstack=False, series=1) ) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_ndtiff_magellanstack(): """Test read NDTiffStorage/MagellanStack.""" # https://github.com/cgohlke/tifffile/issues/23 fname = private_file( 'NDTiffStorage/MagellanStack/Full resolution/democam_MagellanStack.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert len(tif.pages) == 12 # with pytest.warns(UserWarning): assert tif.micromanager_metadata is not None assert 'Comments' not in tif.micromanager_metadata meta = tif.pages[-1].tags['MicroManagerMetadata'].value assert meta['Axes']['repetition'] == 2 assert meta['Axes']['exposure'] == 3 # NDTiff v0 and v1 series are not supported series = tif.series[0] assert series.kind == 'uniform' # not 'ndtiff' assert series.dtype == numpy.uint8 assert series.shape == (12, 512, 512) assert series.axes == 'IYX' data = series.asarray() assert data[8, 100, 100] == 164 assert_aszarr_method(tif, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_ndtiff_v2(): """Test read NDTiffStorage v2.""" fname = private_file( 'NDTiffStorage/v2/ndtiffv2.0_test/Full resolution' '/ndtiffv2.0_test_NDTiffStack.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_ndtiff meta = tif.pages[-1].tags['MicroManagerMetadata'].value assert meta['Axes'] == {'channel': 1, 'time': 4} meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 2 assert meta['Summary']['PixelType'] == 'GRAY16' series = tif.series[0] assert series.kind == 'ndtiff' assert series.dtype == numpy.uint16 assert series.shape == (5, 2, 32, 32) assert series.axes == 'TCYX' data = series.asarray() if not SKIP_NDTIFF: ndt = ndtiff.Dataset(os.path.join(os.path.dirname(fname), '..')) try: assert_array_equal( data, ndt.as_array(axes=['time', 'channel']) ) finally: ndt.close() assert_aszarr_method(tif, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_ndtiff_tiled(): """Test read NDTiffStorage v2 tiled.""" fname = private_file( 'NDTiffStorage/v2/ndtiffv2.0_stitched_test/Full resolution' '/ndtiffv2.0_stitched_test_NDTiffStack.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_ndtiff meta = tif.pages[-1].tags['MicroManagerMetadata'].value assert meta['Axes'] == {'channel': 0, 'column': 0, 'row': 0} meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 2 assert meta['Summary']['PixelType'] == 'GRAY16' series = tif.series[0] assert series.kind == 'ndtiff' assert series.dtype == numpy.uint16 assert series.shape == (2, 2, 32, 32) assert series.axes == 'JKYX' data = series.asarray() if not SKIP_NDTIFF: ndt = ndtiff.Dataset(os.path.join(os.path.dirname(fname), '..')) try: assert_array_equal(data, ndt.as_array(axes=['row', 'column'])) finally: ndt.close() assert_aszarr_method(tif, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_ndtiff_v3(): """Test read NDTiffStorage v3.""" fname = private_file( 'NDTiffStorage/v3/ndtiffv3.0_test/ndtiffv3.0_test_NDTiffStack.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_ndtiff meta = tif.pages[-1].tags['MicroManagerMetadata'].value assert meta['Axes'] == {'channel': 1, 'time': 4} meta = tif.micromanager_metadata 
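        # file-level NDTiff summary metadata, as opposed to the per-page
        # MicroManagerMetadata tag inspected above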
        assert meta is not None
        assert meta['MajorVersion'] == 3
        assert meta['MinorVersion'] == 0
        assert meta['Summary']['PixelType'] == 'GRAY16'
        series = tif.series[0]
        assert series.kind == 'ndtiff'
        assert series.dtype == numpy.uint16
        assert series.shape == (5, 2, 32, 32)
        assert series.axes == 'TCYX'
        data = series.asarray()
        if not SKIP_NDTIFF:
            ndt = ndtiff.Dataset(os.path.dirname(fname))
            try:
                assert_array_equal(
                    data, ndt.as_array(axes=['time', 'channel'])
                )
            finally:
                ndt.close()
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ndtiff_tcz():
    """Test read NDTiffStorage v3 with squeezed axes."""
    fname = private_file(
        'NDTiffStorage/v3/mm_mda_tcz_15/mm_mda_tcz_15_NDTiffStack.tif'
    )
    with TiffFile(fname) as tif:
        assert tif.is_micromanager
        assert tif.is_ndtiff
        meta = tif.pages[-1].tags['MicroManagerMetadata'].value
        assert 'Axes' not in meta  # missing?
        # expected {'channel': 1, 'z': 6, 'position': 0, 'time': 7}
        meta = tif.micromanager_metadata
        assert meta is not None
        assert meta['MajorVersion'] == 3
        assert meta['MinorVersion'] == 0
        assert meta['Summary']['PixelType'] == 'GRAY16'
        series = tif.series[0]
        assert series.kind == 'ndtiff'
        assert series.dtype == numpy.uint16
        assert series.get_shape(False) == (1, 8, 2, 7, 512, 512)
        assert series.get_axes(False) == 'RTCZYX'
        data = series.asarray(squeeze=True)
        assert data.shape == (8, 2, 7, 512, 512)
        if not SKIP_NDTIFF:
            ndt = ndtiff.Dataset(os.path.dirname(fname))
            try:
                # axes=['position', 'time', 'channel', 'z']
                assert_array_equal(data, numpy.squeeze(ndt.as_array()))
            finally:
                ndt.close()
        assert_aszarr_method(tif, data)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_ndtiff_bytesio(caplog):
    """Test read NDTiffStorage v3 with BytesIO."""
    fname = private_file(
        'NDTiffStorage/v3/mm_mda_tcz_15/mm_mda_tcz_15_NDTiffStack.tif'
    )
    with open(fname, 'rb') as fh:
        bytesio = BytesIO(fh.read())
    with TiffFile(bytesio) as tif:
        assert tif.is_micromanager
        assert tif.is_ndtiff
        meta = tif.pages[-1].tags['MicroManagerMetadata'].value
        assert 'Axes' not in meta  # missing?
# expected {'channel': 1, 'z': 6, 'position': 0, 'time': 7} meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 3 assert meta['MinorVersion'] == 0 assert meta['Summary']['PixelType'] == 'GRAY16' series = tif.series[0] assert 'NDTiff.index not found for' in caplog.text assert series.kind == 'generic' # not ndtiff assert series.dtype == numpy.uint16 assert series.get_shape(False) == (112, 512, 512, 1) assert series.get_axes(False) == 'IYXS' data = series.asarray(squeeze=True) assert data.shape == (112, 512, 512) assert_aszarr_method(tif, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_ndtiff_multichannel(): """Test read NDTiffStorage v3 with channel names.""" fname = private_file( 'NDTiffStorage/v3/ndtiff3.2_multichannel' '/NDTiff3.2_multichannel_NDTiffStack.tif' ) with TiffFile(fname) as tif: assert tif.is_micromanager assert tif.is_ndtiff meta = tif.pages[-1].tags['MicroManagerMetadata'].value assert meta['Axes'] == {'channel': 'FITC', 'z': 15, 'time': 7} meta = tif.micromanager_metadata assert meta is not None assert meta['MajorVersion'] == 3 assert meta['MinorVersion'] == 2 assert meta['Summary']['PixelType'] == 'GRAY16' series = tif.series[0] assert series.kind == 'ndtiff' assert series.dtype == numpy.uint16 assert series.shape == (8, 2, 16, 64, 64) assert series.axes == 'TCZYX' data = series.asarray() if not SKIP_NDTIFF: ndt = ndtiff.Dataset(os.path.dirname(fname)) try: assert_array_equal( data, ndt.as_array(axes=['time', 'channel', 'z']) ) finally: ndt.close() assert_aszarr_method(tif, data) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) def test_read_zarr(): """Test read TIFF with zarr.""" fname = public_file('imagecodecs/gray.u1.tif') with TiffFile(fname) as tif: image = tif.asarray() store = tif.aszarr() try: data = zarr.open(store, mode='r') assert_array_equal(image, data) del data finally: store.close() @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) def test_read_zarr_lrucache(): """Test read TIFF with zarr LRUStoreCache.""" # fails with zarr 2.15/16 # https://github.com/zarr-developers/zarr-python/issues/1497 fname = public_file('imagecodecs/gray.u1.tif') with TiffFile(fname) as tif: image = tif.asarray() store = tif.aszarr() try: cache = zarr.LRUStoreCache(store, max_size=2**10) data = zarr.open(cache, mode='r') assert_array_equal(image, data) del data finally: store.close() @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) def test_read_zarr_multifile(): """Test read multifile OME-TIFF with zarr.""" fname = public_file('OME/multifile/multifile-Z1.ome.tiff') with TiffFile(fname) as tif: image = tif.asarray() store = tif.aszarr() try: data = zarr.open(store, mode='r') assert_array_equal(image, data) del data finally: store.close() @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) @pytest.mark.parametrize('multiscales', [None, False, True]) def test_read_zarr_multiscales(multiscales): """Test Zarr store multiscales parameter.""" fname = public_file('tifffile/multiscene_pyramidal.ome.tif') with TiffFile(fname) as tif: page = tif.pages[1] series = tif.series[0] assert series.kind == 'ome' image = page.asarray() with page.aszarr(multiscales=multiscales) as store: z = zarr.open(store, mode='r') if multiscales: assert isinstance(z, zarr.Group) assert_array_equal(z[0][:], image) else: assert isinstance(z, zarr.Array) assert_array_equal(z[:], image) del z with series.aszarr(multiscales=multiscales) as store: z = zarr.open(store, mode='r') if 
multiscales or multiscales is None:
                assert isinstance(z, zarr.Group)
                assert_array_equal(z[0][0, 0, 1], image)
            else:
                assert isinstance(z, zarr.Array)
                assert_array_equal(z[0, 0, 1], image)
            del z


@pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON)
def test_read_zarr_level():
    """Test Zarr store of level."""
    fname = public_file('tifffile/multiscene_pyramidal.ome.tif')
    data = imread(fname, key=1, series=0, level=2)
    store = imread(fname, key=1, series=0, level=2, aszarr=True)
    z = zarr.open(store, mode='r')
    image = z[:]
    del z
    assert_array_equal(image, data)


@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_eer(caplog):
    """Test read EER metadata."""
    # https://github.com/fei-company/EerReaderLib/issues/1
    fname = private_file('EER/Example_1.eer')
    with TiffFile(fname) as tif:
        assert not caplog.text  # no warning
        assert tif.is_bigtiff
        assert tif.is_eer
        assert tif.byteorder == '<'
        assert len(tif.pages) == 238
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_contiguous
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.compression == 65001
        assert page.imagewidth == 4096
        assert page.imagelength == 4096
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        meta = tif.eer_metadata
        assert meta.startswith('')
        # assert data
        data = page.asarray()
        assert data.dtype == '?'
        assert data[428, 443]
        assert not data[428, 444]
        assert_aszarr_method(page, data)
        assert__str__(tif)
        if not SKIP_ZARR:
            try:
                from imagecodecs.numcodecs import register_codecs
            except ImportError:
                return
            register_codecs('imagecodecs_eer', verbose=False)
            filename = os.path.split(fname)[-1]
            url = URL + 'test/private/EER/'
            with TempFileName(filename, ext='.json') as jsonfile:
                with page.aszarr() as store:
                    store.write_fsspec(jsonfile, url)
                # if this fails add ".eer" as "image/tiff" to mime types
                assert_fsspec(URL + os.path.split(jsonfile)[-1], data)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_astrotiff(caplog):
    """Test read AstroTIFF with FITS metadata."""
    # https://astro-tiff.sourceforge.io/
    fname = private_file('AstroTIFF/NGC2024_astro-tiff_sample_48bit.tif')
    with TiffFile(fname) as tif:
        assert not caplog.text  # no warning
        assert tif.is_astrotiff
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert not page.is_contiguous
        assert page.photometric == PHOTOMETRIC.RGB
        assert page.compression == COMPRESSION.ADOBE_DEFLATE
        assert page.imagewidth == 3040
        assert page.imagelength == 2016
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert data and metadata
        assert tuple(page.asarray()[545, 1540]) == (10401, 11804, 12058)
        meta = tif.astrotiff_metadata
        assert meta['SIMPLE']
        assert meta['APTDIA'] == 100.0
        assert meta['APTDIA:COMMENT'] == 'Aperture diameter of telescope in mm'
        assert__str__(tif)


@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
def test_read_streak():
    """Test read Hamamatsu Streak file."""
    fname = private_file('HamamatsuStreak/hamamatsu_streak.tif')
    with TiffFile(fname) as tif:
        assert tif.is_streak
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages.first
        assert page.is_contiguous
        assert page.photometric == PHOTOMETRIC.MINISBLACK
        assert page.imagewidth == 672
        assert page.imagelength == 508
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert data and metadata
        assert page.asarray()[277, 341] == 47
        meta = tif.streak_metadata
        assert
meta['Application']['SoftwareVersion'] == '9.5 pf4' assert meta['Acquisition']['areSource'] == (0, 0, 672, 508) assert meta['Camera']['Prop_InternalLineInterval'] == 9.74436e-06 assert meta['Camera']['Prop_OutputTriggerPeriod_2'] == 0.000001 assert meta['Camera']['HWidth'] == 672 assert meta['DisplayLUT']['EntrySize'] == 4 assert meta['Spectrograph']['Front Ent. Slitw.'] == 0 assert meta['Scaling']['ScalingYScalingFile'] == 'Focus mode' xscale = meta['Scaling']['ScalingXScaling'] assert xscale.size == 672 assert xscale[0] == 231.09092712402344 assert xscale[-1] == 242.59259033203125 assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_read_agilent(): """Test read Agilent Technologies file.""" fname = private_file('Agilent/SG11410002_253174651388_S001.tif') with TiffFile(fname) as tif: assert tif.is_agilent assert not tif.is_mdgel assert len(tif.pages) == 4 assert len(tif.series) == 2 # assert page properties page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.imagewidth == 20334 assert page.imagelength == 7200 assert page.bitspersample == 16 assert page.samplesperpixel == 1 assert page.tags[285].value == 'Red' assert page.tags[37702].value == 'Unknown' # assert data series = tif.series[0] assert series.shape == (2, 7200, 20334) assert series.asarray()[1, 277, 341] == 24 series = tif.series[1] assert series.shape == (2, 2400, 6778) assert series.asarray()[1, 277, 341] == 634 assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) @pytest.mark.parametrize('chunkmode', [0, 2]) def test_read_selection(chunkmode): """Test read selection via imread.""" fname = public_file('tifffile/multiscene_pyramidal.ome.tif') selection = (8, slice(16, 17), slice(None), slice(51, 99), slice(51, 99)) # series 0 assert_array_equal( imread(fname)[8, 16:17, :, 51:99, 51:99], imread(fname, selection=selection, chunkmode=chunkmode), ) # level 1 assert_array_equal( imread(fname, series=0, level=1)[8, 16:17, :, 51:99, 51:99], imread( fname, series=0, level=1, selection=selection, chunkmode=chunkmode ), ) # page 99 assert_array_equal( imread(fname, key=99)[51:99, 51:99], imread( fname, key=99, selection=(slice(51, 99), slice(51, 99)), chunkmode=chunkmode, ), ) # series 1 assert_array_equal( imread(fname, series=1)[51:99, 51:99], imread( fname, series=1, selection=(slice(51, 99), slice(51, 99)), chunkmode=chunkmode, ), ) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) @pytest.mark.parametrize('out', [None, 'empty', 'memmap', 'name']) def test_read_selection_out(out): """Test read selection via imread into out.""" # https://github.com/cgohlke/tifffile/pull/222 fname = public_file('tifffile/multiscene_pyramidal.ome.tif') selection = (8, slice(16, 17), slice(None), slice(51, 99), slice(51, 99)) expected = imread(fname)[8, 16:17, :, 51:99, 51:99] if out is None: # new array image = imread(fname, selection=selection, out=None) elif out == 'empty': # existing array image = numpy.empty_like(expected) imread(fname, selection=selection, out=image) elif out == 'memmap': # memmap in temp dir image = imread(fname, selection=selection, out='memmap') assert isinstance(image, numpy.memmap) elif out == 'name': # memmap in specified file with TempFileName('read_selection_out', ext='.memmap') as fileout: image = imread(fname, selection=selection, out=fileout) assert isinstance(image, numpy.memmap) assert_array_equal(image, expected) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def 
test_read_selection_filesequence(): """Test read selection from file sequence via imread.""" fname = private_files('TiffSequence/*.tif') assert_array_equal( imread(fname)[5:8, 51:99, 51:99], imread(fname, selection=(slice(5, 8), slice(51, 99), slice(51, 99))), ) def test_read_xarray_page_properties(): """Test read TiffPage xarray properties.""" dtype = numpy.uint8 resolution = (1.1, 2.2) with TempFileName('read_xarray_page_properties') as fname: with TiffWriter(fname) as tif: # gray tif.write( shape=(33, 31), dtype=dtype, resolution=resolution, photometric='minisblack', ) # RGB tif.write( shape=(33, 31, 3), dtype=dtype, resolution=resolution, photometric='rgb', ) # RGBA tif.write( shape=(33, 31, 4), dtype=dtype, resolution=resolution, photometric='rgb', ) # CMYK tif.write( shape=(33, 31, 4), dtype=dtype, resolution=resolution, photometric='separated', ) # gray with extrasamples tif.write( shape=(33, 31, 5), dtype=dtype, resolution=resolution, photometric='minisblack', planarconfig='contig', ) # RRGGBB tif.write( shape=(3, 33, 31), dtype=dtype, resolution=resolution, photometric='rgb', planarconfig='separate', ) # depth tif.write( shape=(7, 33, 31), dtype=dtype, resolution=resolution, photometric='minisblack', volumetric=True, ) xcoords = numpy.linspace( 0, 31 / resolution[0], 31, endpoint=False, dtype=numpy.float32 ) ycoords = numpy.linspace( 0, 33 / resolution[1], 33, endpoint=False, dtype=numpy.float32 ) # zcoords = numpy.linspace( # 0, 7 / 1, 7, endpoint=False, dtype=numpy.float32 # ) with TiffFile(fname) as tif: # gray page = tif.pages.first assert page.name == 'TiffPage 0' assert page.shape == (33, 31) assert page.ndim == 2 assert page.axes == 'YX' assert page.dims == ('height', 'width') assert page.sizes == {'height': 33, 'width': 31} assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) assert page.attr == {} # RGB page = tif.pages[1] assert page.name == 'TiffPage 1' assert page.shape == (33, 31, 3) assert page.ndim == 3 assert page.axes == 'YXS' assert page.dims == ('height', 'width', 'sample') assert page.sizes == {'height': 33, 'width': 31, 'sample': 3} assert_array_equal( page.coords['sample'], numpy.array(['Red', 'Green', 'Blue']) ) assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) # RGBA page = tif.pages[2] assert page.name == 'TiffPage 2' assert page.shape == (33, 31, 4) assert page.ndim == 3 assert page.axes == 'YXS' assert page.dims == ('height', 'width', 'sample') assert page.sizes == {'height': 33, 'width': 31, 'sample': 4} assert_array_equal( page.coords['sample'], numpy.array(['Red', 'Green', 'Blue', 'Unassalpha']), ) assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) # CMYK page = tif.pages[3] assert page.name == 'TiffPage 3' assert page.shape == (33, 31, 4) assert page.ndim == 3 assert page.axes == 'YXS' assert page.dims == ('height', 'width', 'sample') assert page.sizes == {'height': 33, 'width': 31, 'sample': 4} assert_array_equal( page.coords['sample'], numpy.array(['Cyan', 'Magenta', 'Yellow', 'Black']), ) assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) # gray with extrasamples page = tif.pages[4] assert page.name == 'TiffPage 4' assert page.shape == (33, 31, 5) assert page.ndim == 3 assert page.axes == 'YXS' assert page.dims == ('height', 'width', 'sample') assert page.sizes == {'height': 33, 'width': 31, 'sample': 5} assert_array_equal( page.coords['sample'], 
numpy.arange(5), ) assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) # RRGGBB page = tif.pages[5] assert page.name == 'TiffPage 5' assert page.shape == (3, 33, 31) assert page.ndim == 3 assert page.axes == 'SYX' assert page.dims == ('sample', 'height', 'width') assert page.sizes == {'sample': 3, 'height': 33, 'width': 31} assert_array_equal( page.coords['sample'], numpy.array(['Red', 'Green', 'Blue']) ) assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) # depth page = tif.pages[6] assert page.name == 'TiffPage 6' assert page.shape == (7, 33, 31) assert page.ndim == 3 assert page.axes == 'ZYX' assert page.dims == ('depth', 'height', 'width') assert page.sizes == {'depth': 7, 'height': 33, 'width': 31} assert_array_equal(page.coords['depth'], numpy.arange(7)) assert_array_equal(page.coords['height'], ycoords) assert_array_equal(page.coords['width'], xcoords) ############################################################################### # Test TiffWriter WRITE_DATA = numpy.arange(3 * 219 * 301).astype(numpy.uint16) WRITE_DATA.shape = (3, 219, 301) # type: ignore[assignment] @pytest.mark.skipif(SKIP_EXTENDED, reason=REASON) @pytest.mark.parametrize( 'shape', [ (219, 301), (219, 301, 2), (219, 301, 3), (219, 301, 4), (2, 219, 301), (3, 219, 301), (4, 219, 301), (5, 219, 301), (4, 3, 219, 301), (4, 219, 301, 3), (3, 4, 219, 301), (3, 4, 219, 301, 1), ], ) @pytest.mark.parametrize( 'compression', [None] # , 'zlib', 'lzw', 'lzma', 'zstd', 'packbits'] ) @pytest.mark.parametrize('dtype', list('?bhiqefdBHIQFD')) @pytest.mark.parametrize('byteorder', ['>', '<']) @pytest.mark.parametrize('bigtiff', ['plaintiff', 'bigtiff']) @pytest.mark.parametrize('tile', [None, (64, 64)]) @pytest.mark.parametrize('data', ['random', None]) def test_write(data, byteorder, bigtiff, compression, dtype, shape, tile): """Test TiffWriter with various options.""" if compression is not None and (data is None or SKIP_CODECS): pytest.xfail(REASON) fname = 'write_{}_{}_{}_{}{}{}{}'.format( bigtiff, {'<': 'le', '>': 'be'}[byteorder], numpy.dtype(dtype).name, str(shape).replace(' ', ''), '_tiled' if tile is not None else '', '_empty' if data is None else '', f'_{compression}' if compression is not None else '', ) bigtiff = bigtiff == 'bigtiff' if (3 in shape or 4 in shape) and shape[-1] != 1 and dtype != '?': photometric = 'rgb' else: photometric = None with TempFileName(fname) as fname: if data is None: with TiffWriter( fname, byteorder=byteorder, bigtiff=bigtiff ) as tif: if dtype == '?': # cannot write non-contiguous empty file with pytest.raises(ValueError): tif.write( shape=shape, dtype=dtype, tile=tile, photometric=photometric, ) return tif.write( shape=shape, dtype=dtype, tile=tile, photometric=photometric, ) assert__repr__(tif) with TiffFile(fname) as tif: assert__str__(tif) image = tif.asarray() else: data = random_data(dtype, shape) imwrite( fname, data, byteorder=byteorder, bigtiff=bigtiff, tile=tile, photometric=photometric, compression=compression, ) image = imread(fname) assert image.flags['C_CONTIGUOUS'] assert_array_equal(data.squeeze(), image.squeeze()) if not SKIP_ZARR: with imread(fname, aszarr=True) as store: data = zarr.open(store, mode='r') assert_array_equal(data, image) assert shape == image.shape assert dtype == image.dtype if not bigtiff: assert_valid_tiff(fname) def test_write_deprecated_planarconfig(): """Test deprecated planarconfig.""" with TempFileName('write_deprecated_planarconfig_contig') as 
fname: with pytest.warns(DeprecationWarning): imwrite(fname, shape=(31, 33, 3), dtype=numpy.float32) with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.RGB assert page.planarconfig == PLANARCONFIG.CONTIG with TempFileName('write_deprecated_planarconfig_separate') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, shape=(3, 31, 33), dtype=numpy.float32) with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.RGB assert page.planarconfig == PLANARCONFIG.SEPARATE def test_write_ycbcr_subsampling(caplog): """Test write YCBCR.""" with TempFileName('write_ycbcr_subsampling') as fname: imwrite( fname, shape=(31, 33, 3), dtype=numpy.uint8, photometric=PHOTOMETRIC.YCBCR, subsampling=(1, 2), ) assert 'cannot apply subsampling' in caplog.text with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.YCBCR assert page.subsampling == (1, 1) assert 'ReferenceBlackWhite' in page.tags @pytest.mark.parametrize('samples', [0, 1, 2]) def test_write_invalid_samples(samples): """Test TiffWriter with invalid options.""" data = numpy.zeros((16, 16, samples) if samples else (16, 16), numpy.uint8) fname = f'write_invalid_samples{samples}' with TempFileName(fname) as fname: with pytest.raises(ValueError): imwrite(fname, data, photometric='rgb') @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) @pytest.mark.parametrize('tile', [False, True]) @pytest.mark.parametrize( 'codec', [ 'adobe_deflate', 'lzma', 'lzw', 'packbits', 'zstd', 'webp', 'png', 'jpeg', 'jpegxl', 'jpegxr', 'jpeg2000', ], ) @pytest.mark.parametrize('mode', ['gray', 'rgb', 'planar']) def test_write_codecs(mode, tile, codec): """Test write various compression.""" if mode in {'gray', 'planar'} and codec == 'webp': pytest.xfail("WebP doesn't support grayscale or planar mode") level = {'webp': -1, 'jpeg': 99}.get(codec, None) if level: compressionargs = {'level': level} else: compressionargs = None tile = (16, 16) if tile else None data = numpy.load(public_file('tifffile/rgb.u1.npy')) if mode == 'rgb': photometric = PHOTOMETRIC.RGB planarconfig = PLANARCONFIG.CONTIG elif mode == 'planar': photometric = PHOTOMETRIC.RGB planarconfig = PLANARCONFIG.SEPARATE data = numpy.moveaxis(data, -1, 0).copy() else: planarconfig = None photometric = PHOTOMETRIC.MINISBLACK data = data[..., :1].copy() data = numpy.repeat(data[numpy.newaxis], 3, axis=0) data[1] = 255 - data[1] shape = data.shape with TempFileName( 'write_codecs_{}_{}{}'.format(mode, codec, '_tile' if tile else '') ) as fname: imwrite( fname, data, compression=codec, compressionargs=compressionargs, tile=tile, photometric=photometric, planarconfig=planarconfig, subsampling=(1, 1), ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == shape[0] page = tif.pages.first assert not page.is_contiguous assert page.compression == enumarg(COMPRESSION, codec) assert page.photometric in {photometric, PHOTOMETRIC.YCBCR} if planarconfig is not None: assert page.planarconfig == planarconfig assert page.imagewidth == 31 assert page.imagelength == 32 assert page.samplesperpixel == 1 if mode == 'gray' else 3 # samplesperpixel = page.samplesperpixel image = tif.asarray() if codec == 'jpeg': assert_allclose(data, image, atol=10) else: assert_array_equal(data, image) assert_decode_method(page) assert__str__(tif) if ( imagecodecs.TIFF.available and codec not in {'png', 'jpegxr', 'jpeg2000', 'jpegxl'} and mode != 'planar' ): im = imagecodecs.imread(fname, index=None) # if 
codec == 'jpeg': # # tiff_decode returns JPEG compressed TIFF as RGBA # im = numpy.squeeze(im[..., :samplesperpixel]) assert_array_equal(im, numpy.squeeze(image)) @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) @pytest.mark.parametrize('mode', ['gray', 'rgb', 'planar']) @pytest.mark.parametrize('tile', [False, True]) @pytest.mark.parametrize( 'dtype', ['u1', 'u2', 'u4', 'i1', 'i2', 'i4', 'f2', 'f4', 'f8'] ) @pytest.mark.parametrize('byteorder', ['>', '<']) def test_write_predictor(byteorder, dtype, tile, mode): """Test predictors.""" if dtype[0] == 'f' and SKIP_CODECS: pytest.xfail('requires imagecodecs') tile = (32, 32) if tile else None f4 = imread(public_file('tifffile/gray.f4.tif')) if mode == 'rgb': photometric = PHOTOMETRIC.RGB planarconfig = PLANARCONFIG.CONTIG data = numpy.empty((83, 111, 3), 'f4') data[..., 0] = f4 data[..., 1] = f4[::-1] data[..., 2] = f4[::-1, ::-1] elif mode == 'planar': photometric = PHOTOMETRIC.RGB planarconfig = PLANARCONFIG.SEPARATE data = numpy.empty((3, 83, 111), 'f4') data[0] = f4 data[1] = f4[::-1] data[2] = f4[::-1, ::-1] else: planarconfig = None photometric = PHOTOMETRIC.MINISBLACK data = f4 if dtype[0] in 'if': data -= 0.5 if dtype in 'u1i1': data *= 255 elif dtype in 'i2u2': data *= 2**12 elif dtype in 'i4u4': data *= 2**21 else: data *= 3.145 data = data.astype(byteorder + dtype) with TempFileName( 'write_predictor_{}_{}_{}{}'.format( dtype, 'be' if byteorder == '>' else 'le', mode, '_tile' if tile else '', ) ) as fname: imwrite( fname, data, predictor=True, compression=COMPRESSION.ADOBE_DEFLATE, tile=tile, photometric=photometric, planarconfig=planarconfig, byteorder=byteorder, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert tif.tiff.byteorder == byteorder assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.predictor == (3 if dtype[0] == 'f' else 2) assert page.photometric == photometric if planarconfig is not None: assert page.planarconfig == planarconfig assert page.imagewidth == 111 assert page.imagelength == 83 assert page.samplesperpixel == 1 if mode == 'gray' else 3 # samplesperpixel = page.samplesperpixel image = tif.asarray() assert_array_equal(data, image) assert_decode_method(page) assert__str__(tif) if not SKIP_CODECS and imagecodecs.TIFF.available: im = imagecodecs.imread(fname, index=None) assert_array_equal(im, numpy.squeeze(image)) @pytest.mark.parametrize('bytecount', [16, 256]) @pytest.mark.parametrize('count', [1, 2, 4]) @pytest.mark.parametrize('compression', [0, 6]) @pytest.mark.parametrize('tiled', [0, 1]) @pytest.mark.parametrize('bigtiff', [0, 1]) def test_write_bytecount(bigtiff, tiled, compression, count, bytecount): """Test write bytecount formats.""" if tiled: tag = 'TileByteCounts' rowsperstrip = None tile = (bytecount, bytecount) shape = { 1: (bytecount, bytecount), 2: (bytecount * 2, bytecount), 4: (bytecount * 2, bytecount * 2), }[count] is_contiguous = count != 4 and compression == 0 else: tag = 'StripByteCounts' tile = None rowsperstrip = bytecount shape = (bytecount * count, bytecount) is_contiguous = compression == 0 data = random_data(numpy.uint8, shape) if count == 1: dtype = DATATYPE.LONG8 if bigtiff else DATATYPE.LONG elif bytecount == 256: dtype = DATATYPE.LONG else: dtype = DATATYPE.SHORT with TempFileName( f'write_bytecounts_{bigtiff}{tiled}{compression}{count}{bytecount}' ) as fname: imwrite( fname, data, bigtiff=bigtiff, tile=tile, compression=COMPRESSION.ADOBE_DEFLATE if compression else None, 
compressionargs={'level': compression} if compression else None, rowsperstrip=rowsperstrip, ) if not bigtiff: assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.tags[tag].count == count assert page.tags[tag].dtype == dtype assert page.is_contiguous == is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.imagewidth == shape[1] assert page.imagelength == shape[0] assert page.samplesperpixel == 1 assert_array_equal(page.asarray(), data) assert_aszarr_method(page, data) assert__str__(tif) @pytest.mark.skipif(SKIP_EXTENDED, reason=REASON) @pytest.mark.parametrize('repeat', [1, 4]) @pytest.mark.parametrize('shape', [(1, 0), (0, 1), (3, 0, 2, 1)]) @pytest.mark.parametrize('data', ['random', 'empty']) @pytest.mark.parametrize('shaped', [True, False]) def test_write_zeroshape(shaped, data, repeat, shape): """Test write arrays with zero shape.""" dtype = numpy.uint8 fname = 'write_shape_{}x{}{}{}'.format( repeat, str(shape).replace(' ', ''), '_shaped' if shaped else '', '_empty' if data == 'empty' else '', ) metadata = {} if shaped else None with TempFileName(fname) as fname: if data == 'empty': with TiffWriter(fname) as tif: with pytest.warns(UserWarning): for _ in range(repeat): tif.write( shape=shape, dtype=dtype, contiguous=True, metadata=metadata, ) tif.write(numpy.zeros((16, 16), 'u2'), metadata=metadata) with TiffFile(fname) as tif: assert__str__(tif) image = zimage = tif.asarray() if not SKIP_ZARR: zimage = zarr.open(tif.aszarr(), mode='r') else: data = random_data(dtype, shape) with TiffWriter(fname) as tif: with pytest.warns(UserWarning): for _ in range(repeat): tif.write(data, contiguous=True, metadata=metadata) tif.write(numpy.zeros((16, 16), 'u2'), metadata=metadata) with TiffFile(fname) as tif: assert__str__(tif) image = zimage = tif.asarray() if not SKIP_ZARR: zimage = zarr.open(tif.aszarr(), mode='r') assert image.flags['C_CONTIGUOUS'] if shaped: if repeat > 1: for i in range(repeat): assert_array_equal(image[i], data) assert_array_equal(zimage[i], data) else: assert_array_equal(image, data) assert_array_equal(zimage, data) else: empty = numpy.empty((0, 0), dtype) if repeat > 1: for i in range(repeat): assert_array_equal(image[i], empty) assert_array_equal(zimage[i], empty) else: assert_array_equal(image.squeeze(), empty) # assert_array_equal(zimage.squeeze(), empty) if repeat > 1: assert image.shape[0] == repeat assert zimage.shape[0] == repeat elif shaped: assert shape == image.shape assert shape == zimage.shape else: assert image.shape == (0, 0) assert zimage.shape == (0, 0) assert dtype == image.dtype assert dtype == zimage.dtype @pytest.mark.parametrize('repeats', [1, 2]) @pytest.mark.parametrize('series', [1, 2]) @pytest.mark.parametrize('subifds', [0, 1, 2]) @pytest.mark.parametrize('compressed', [False, True]) @pytest.mark.parametrize('tiled', [False, True]) @pytest.mark.parametrize('ome', [False, True]) def test_write_subidfs(ome, tiled, compressed, series, repeats, subifds): """Test write SubIFDs.""" # use BigTIFF to prevent Windows explorer from locking the file if repeats > 1 and (compressed or tiled or ome): pytest.xfail('contiguous not working with compression, tiles, ome') data = [ (numpy.random.rand(5, 64, 64) * 1023).astype(numpy.uint16), (numpy.random.rand(5, 32, 32) * 1023).astype(numpy.uint16), (numpy.random.rand(5, 16, 16) * 1023).astype(numpy.uint16), ] kwargs = { 'tile': (16, 16) if tiled else None, 'compression': 
COMPRESSION.ADOBE_DEFLATE if compressed else None, 'compressionargs': {'level': 6} if compressed else None, } with TempFileName( 'write_subidfs_' f'{ome}-{tiled}-{compressed}-{subifds}-{series}-{repeats}' ) as fname: with TiffWriter(fname, ome=ome, bigtiff=True) as tif: for _ in range(series): for r in range(repeats): kwargs['contiguous'] = r != 0 tif.write(data[0], subifds=subifds, **kwargs) for i in range(1, subifds + 1): for r in range(repeats): kwargs['contiguous'] = r != 0 tif.write(data[i], subfiletype=1, **kwargs) with TiffFile(fname) as tif: for i, page in enumerate(tif.pages): assert not page.is_subifd if i % (5 * repeats): assert page.description == '' elif ome: if i == 0: assert page.is_ome else: assert page.description == '' else: assert page.is_shaped assert_array_equal(page.asarray(), data[0][i % 5]) assert_aszarr_method(page, data[0][i % 5]) if subifds: assert len(page.pages) == subifds for j, subifd in enumerate(page.pages): assert subifd.is_subifd assert_array_equal( subifd.asarray(), data[j + 1][i % 5] ) assert_aszarr_method(subifd, data[j + 1][i % 5]) else: assert page.pages is None for i, page in enumerate(tif.pages[:-1]): assert page._nextifd() == tif.pages[i + 1].offset if subifds: for j, subifd in enumerate(page.pages[:-1]): assert subifd.is_subifd assert subifd.subfiletype == FILETYPE.REDUCEDIMAGE assert subifd._nextifd() == page.subifds[j + 1] assert page.pages[-1]._nextifd() == 0 else: assert page.pages is None assert len(tif.series) == series if repeats > 1: for s in range(series): assert tif.series[s].kind == 'ome' if ome else 'shaped' assert_array_equal(tif.series[s].asarray()[0], data[0]) for i in range(subifds): assert_array_equal( tif.series[s].levels[i + 1].asarray()[0], data[i + 1], ) else: for s in range(series): assert tif.series[s].kind == 'ome' if ome else 'shaped' assert_array_equal(tif.series[s].asarray(), data[0]) for i in range(subifds): assert_array_equal( tif.series[s].levels[i + 1].asarray(), data[i + 1] ) def test_write_lists(): """Test write lists.""" array = numpy.arange(1000).reshape(10, 10, 10).astype(numpy.uint16) data = array.tolist() with TempFileName('write_lists') as fname: with TiffWriter(fname) as tif: tif.write(data, dtype=numpy.uint16) tif.write(data, compression=COMPRESSION.ADOBE_DEFLATE) tif.write([100.0]) with pytest.warns(UserWarning): tif.write([]) with TiffFile(fname) as tif: assert_array_equal(tif.series[0].asarray(), array) assert_array_equal(tif.series[1].asarray(), array) assert_array_equal(tif.series[2].asarray(), [100.0]) assert_array_equal(tif.series[3].asarray(), []) assert_aszarr_method(tif.series[0], array) assert_aszarr_method(tif.series[1], array) assert_aszarr_method(tif.series[2], [100.0]) # assert_aszarr_method(tif.series[3], []) def test_write_nopages(): """Test write TIFF with no pages.""" with TempFileName('write_nopages') as fname: with TiffWriter(fname) as tif: pass with TiffFile(fname) as tif: assert len(tif.pages) == 0 tif.asarray() if not SKIP_VALIDATE: with pytest.raises(ValueError): assert_valid_tiff(fname) def test_write_append_not_exists(): """Test append to non existing file.""" with TempFileName('write_append_not_exists.bin') as fname: # with self.assertRaises(ValueError): with TiffWriter(fname, append=True): pass def test_write_append_nontif(): """Test fail to append to non-TIFF file.""" with TempFileName('write_append_nontif.bin') as fname: with open(fname, 'wb') as fh: fh.write(b'not a TIFF file') with pytest.raises(TiffFileError): with TiffWriter(fname, append=True): pass 
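# A minimal sketch (not part of the original suite) of the SubIFD pattern
# exercised by test_write_subidfs above: the main page announces how many
# SubIFDs to reserve, and each following write with subfiletype=1
# (FILETYPE.REDUCEDIMAGE) fills one reduced-resolution level.
# The file name and shapes are illustrative only.
def _sketch_write_subifds(fname):
    import numpy
    from tifffile import TiffWriter

    base = (numpy.random.rand(64, 64) * 255).astype(numpy.uint8)
    with TiffWriter(fname) as tif:
        tif.write(base, subifds=2)  # reserve two SubIFD slots
        tif.write(base[::2, ::2], subfiletype=1)  # reduced level 1
        tif.write(base[::4, ::4], subfiletype=1)  # reduced level 2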
@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_write_append_lsm(): """Test fail to append to LSM file.""" fname = private_file('lsm/take1.lsm') with pytest.raises(ValueError): with TiffWriter(fname, append=True): pass @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_write_append_create(): """Test append to non-existing file.""" with TempFileName('write_append_create') as fname: if os.path.exists(fname): os.remove(fname) with TiffWriter(fname, append=True) as tif: tif.write(shape=(31, 33), dtype=numpy.uint8) data = imread(fname) assert data.shape == (31, 33) def test_write_append_imwrite(): """Test append using imwrite.""" data = random_data(numpy.uint8, (21, 31)) with TempFileName('write_imwrite_append') as fname: imwrite(fname, data, metadata=None) for _ in range(3): imwrite(fname, data, append=True, metadata=None) a = imread(fname) assert a.shape == (4, 21, 31) assert_array_equal(a[3], data) def test_write_append(): """Test append to existing TIFF file.""" data = random_data(numpy.uint8, (21, 31)) with TempFileName('write_append') as fname: with TiffWriter(fname) as tif: pass with TiffFile(fname) as tif: assert len(tif.pages) == 0 assert__str__(tif) with TiffWriter(fname, append=True) as tif: tif.write(data) with TiffFile(fname) as tif: assert len(tif.series) == 1 assert len(tif.pages) == 1 page = tif.pages.first assert page.imagewidth == 31 assert page.imagelength == 21 assert__str__(tif) with TiffWriter(fname, append=True) as tif: tif.write(data) tif.write(data, contiguous=True) with TiffFile(fname) as tif: assert len(tif.series) == 2 assert len(tif.pages) == 3 page = tif.pages.first assert page.imagewidth == 31 assert page.imagelength == 21 assert_array_equal(tif.asarray(series=1)[1], data) assert__str__(tif) assert_valid_tiff(fname) def test_write_append_bytesio(): """Test append to existing TIFF file in BytesIO.""" data = random_data(numpy.uint8, (21, 31)) offset = 11 file = BytesIO() file.write(b'a' * offset) with TiffWriter(file) as tif: pass file.seek(offset) with TiffFile(file) as tif: assert len(tif.pages) == 0 file.seek(offset) with TiffWriter(file, append=True) as tif: tif.write(data) file.seek(offset) with TiffFile(file) as tif: assert len(tif.series) == 1 assert len(tif.pages) == 1 page = tif.pages.first assert page.imagewidth == 31 assert page.imagelength == 21 assert__str__(tif) file.seek(offset) with TiffWriter(file, append=True) as tif: tif.write(data) tif.write(data, contiguous=True) file.seek(offset) with TiffFile(file) as tif: assert len(tif.series) == 2 assert len(tif.pages) == 3 page = tif.pages.first assert page.imagewidth == 31 assert page.imagelength == 21 assert_array_equal(tif.asarray(series=1)[1], data) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) def test_write_roundtrip_filename(): """Test write and read using file name.""" data = imread(public_file('tifffile/generic_series.tif')) with TempFileName('write_roundtrip_filename') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.RGB) assert_array_equal(imread(fname), data) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) def test_write_roundtrip_openfile(): """Test write and read using open file.""" pad = b'0' * 7 data = imread(public_file('tifffile/generic_series.tif')) with TempFileName('write_roundtrip_openfile') as fname: with open(fname, 'wb') as fh: fh.write(pad) imwrite(fh, data, photometric=PHOTOMETRIC.RGB) fh.write(pad) with open(fname, 'rb') as fh: fh.seek(len(pad)) assert_array_equal(imread(fh), data) 
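# Condensed from test_write_append_imwrite above into a minimal sketch (not
# part of the original suite): each imwrite call with append=True adds pages
# to the existing file, and contiguous, equally shaped pages read back as one
# stacked series. File name and shapes are illustrative only.
def _sketch_append(fname):
    import numpy
    from tifffile import imwrite, imread

    frame = numpy.zeros((21, 31), numpy.uint8)
    imwrite(fname, frame, metadata=None)
    for _ in range(3):
        imwrite(fname, frame, append=True, metadata=None)
    assert imread(fname).shape == (4, 21, 31)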
@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) def test_write_roundtrip_bytesio(): """Test write and read using BytesIO.""" pad = b'0' * 7 data = imread(public_file('tifffile/generic_series.tif')) buf = BytesIO() buf.write(pad) imwrite(buf, data, photometric=PHOTOMETRIC.RGB) buf.write(pad) buf.seek(len(pad)) assert_array_equal(imread(buf), data) def test_write_pages(): """Test write tags for contiguous data in all pages.""" data = random_data(numpy.float32, (17, 219, 301)) with TempFileName('write_pages') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.MINISBLACK) assert_valid_tiff(fname) # assert file with TiffFile(fname) as tif: assert len(tif.pages) == 17 for i, page in enumerate(tif.pages): assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 image = page.asarray() assert_array_equal(data[i], image) # assert series series = tif.series[0] assert series.kind == 'shaped' assert series.dataoffset is not None image = series.asarray() assert_array_equal(data, image) assert__str__(tif) def test_write_truncate(): """Test only one page is written for truncated files.""" shape = (4, 5, 6, 1) with TempFileName('write_truncate') as fname: imwrite(fname, random_data(numpy.uint8, shape), truncate=True) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 # not 4 page = tif.pages.first assert page.is_shaped assert page.shape == (5, 6) assert '"shape": [4, 5, 6, 1]' in page.description assert '"truncated": true' in page.description series = tif.series[0] assert series.is_truncated assert series.kind == 'shaped' assert series.shape == shape assert len(series._pages) == 1 assert len(series.pages) == 1 data = tif.asarray() assert data.shape == shape assert_aszarr_method(tif, data) assert_aszarr_method(tif, data, chunkmode='page') assert__str__(tif) def test_write_is_shaped(): """Test files are written with shape.""" with TempFileName('write_is_shaped') as fname: imwrite( fname, random_data(numpy.uint8, (4, 5, 6, 3)), photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 4 page = tif.pages.first assert page.is_shaped assert page.description == '{"shape": [4, 5, 6, 3]}' assert__str__(tif) with TempFileName('write_is_shaped_with_description') as fname: descr = 'test is_shaped_with_description' imwrite( fname, random_data(numpy.uint8, (5, 6, 3)), photometric=PHOTOMETRIC.RGB, description=descr, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_shaped assert page.description == descr assert_aszarr_method(page) assert_aszarr_method(page, chunkmode='page') assert__str__(tif) def test_write_bytes_str(): """Test write bytes in place of 7-bit ascii string.""" micron = b'micron \xb5' # can't be encoded as 7-bit ascii data = numpy.arange(4, dtype=numpy.uint32).reshape((2, 2)) with TempFileName('write_bytes_str') as fname: imwrite( fname, data, description=micron, software=micron, extratags=[(50001, 's', 8, micron, True)], ) with TiffFile(fname) as tif: page = tif.pages.first assert page.description == 'micron \xb5' assert page.software == 'micron \xb5' assert page.tags[50001].value == 'micron \xb5' def test_write_extratags(): """Test write extratags.""" data = random_data(numpy.uint8, (2, 219, 301)) description = 'Created by TestTiffWriter\nLorem ipsum dolor...' 
pagename = 'Page name' extratags = [ ('ImageDescription', 's', 0, description, True), ('PageName', 's', 0, pagename, False), (50001, 'b', 1, b'1', True), (50002, 'b', 2, b'12', True), (50004, 'b', 4, b'1234', True), (50008, 'B', 8, b'12345678', True), ] with TempFileName('write_extratags') as fname: imwrite(fname, data, extratags=extratags) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 2 assert tif.pages.first.description1 == description assert 'ImageDescription' not in tif.pages[1].tags assert tif.pages.first.tags['PageName'].value == pagename assert tif.pages[1].tags['PageName'].value == pagename assert '50001' not in tif.pages[1].tags tags = tif.pages.first.tags assert tags['50001'].value == 49 assert tags['50002'].value == (49, 50) assert tags['50004'].value == (49, 50, 51, 52) assert_array_equal(tags['50008'].value, b'12345678') # (49, 50, 51, 52, 53, 54, 55, 56)) assert__str__(tif) def test_write_double_tags(): """Test write single and sequences of doubles.""" # older versions of tifffile do not use offset to write doubles # reported by Eric Prestat on Feb 21, 2016 data = random_data(numpy.uint8, (8, 8)) value = math.pi extratags = [ (34563, 'd', 1, value, False), (34564, 'd', 1, (value,), False), (34565, 'd', 2, (value, value), False), (34566, 'd', 2, [value, value], False), (34567, 'd', 2, numpy.array((value, value)), False), ] with TempFileName('write_double_tags') as fname: imwrite(fname, data, extratags=extratags) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 tags = tif.pages.first.tags assert tags['34563'].value == value assert tags['34564'].value == value assert tuple(tags['34565'].value) == (value, value) assert tuple(tags['34566'].value) == (value, value) assert tuple(tags['34567'].value) == (value, value) assert__str__(tif) with TempFileName('write_double_tags_bigtiff') as fname: imwrite(fname, data, bigtiff=True, extratags=extratags) # assert_jhove(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 tags = tif.pages.first.tags assert tags['34563'].value == value assert tags['34564'].value == value assert tuple(tags['34565'].value) == (value, value) assert tuple(tags['34566'].value) == (value, value) assert tuple(tags['34567'].value) == (value, value) assert__str__(tif) def test_write_short_tags(): """Test write single and sequences of words.""" data = random_data(numpy.uint8, (8, 8)) value = 65531 extratags = [ (34564, 'H', 1, (value,) * 1, False), (34565, 'H', 2, (value,) * 2, False), (34566, 'H', 3, (value,) * 3, False), (34567, 'H', 4, (value,) * 4, False), ] with TempFileName('write_short_tags') as fname: imwrite(fname, data, extratags=extratags) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 tags = tif.pages.first.tags assert tags['34564'].value == value assert tuple(tags['34565'].value) == (value,) * 2 assert tuple(tags['34566'].value) == (value,) * 3 assert tuple(tags['34567'].value) == (value,) * 4 assert__str__(tif) @pytest.mark.parametrize('subfiletype', [0b1, 0b10, 0b100, 0b1000, 0b1111]) def test_write_subfiletype(subfiletype): """Test write subfiletype.""" data = random_data(numpy.uint8, (16, 16)) if subfiletype & 0b100: data = data.astype('bool') with TempFileName(f'write_subfiletype_{subfiletype}') as fname: imwrite(fname, data, subfiletype=subfiletype) assert_valid_tiff(fname) with TiffFile(fname) as tif: page = tif.pages.first assert page.subfiletype == subfiletype assert page.is_reduced == bool(subfiletype & 0b1) assert page.is_multipage == 
bool(subfiletype & 0b10)
        assert page.is_mask == bool(subfiletype & 0b100)
        assert page.is_mrc == bool(subfiletype & 0b1000)
        assert_array_equal(data, page.asarray())
        assert__str__(tif)


@pytest.mark.parametrize('dt', [None, True, datetime, '2019:01:30 04:05:37'])
def test_write_datetime_tag(dt):
    """Test write datetime tag."""
    arg = dt
    if dt is datetime:
        arg = datetime.datetime.now().replace(microsecond=0)
    data = random_data(numpy.uint8, (31, 32))
    with TempFileName('write_datetime') as fname:
        imwrite(fname, data, datetime=arg)
        with TiffFile(fname) as tif:
            page = tif.pages.first
            if dt is None:
                assert 'DateTime' not in page.tags
                assert page.datetime is None
            elif dt is True:
                dt = datetime.datetime.now().strftime('%Y:%m:%d %H:')
                assert page.tags['DateTime'].value.startswith(dt)
            elif dt is datetime:
                assert page.tags['DateTime'].value == arg.strftime(
                    '%Y:%m:%d %H:%M:%S'
                )
                assert page.datetime == arg
            else:
                assert page.tags['DateTime'].value == dt
                assert page.datetime == datetime.datetime.strptime(
                    dt, '%Y:%m:%d %H:%M:%S'
                )
            assert__str__(tif)


def test_write_software_tag():
    """Test write Software tag."""
    data = random_data(numpy.uint8, (2, 219, 301))
    software = 'test_tifffile.py'
    with TempFileName('write_software_tag') as fname:
        imwrite(fname, data, software=software)
        assert_valid_tiff(fname)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 2
            assert tif.pages.first.software == software
            assert 'Software' not in tif.pages[1].tags
            assert__str__(tif)


def test_write_description_tag():
    """Test write two description tags."""
    data = random_data(numpy.uint8, (2, 219, 301))
    description = 'Created by TestTiffWriter\nLorem ipsum dolor...'
    with TempFileName('write_description_tag') as fname:
        imwrite(fname, data, description=description)
        assert_valid_tiff(fname)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 2
            assert tif.pages.first.description == description
            assert tif.pages.first.description1 == '{"shape": [2, 219, 301]}'
            assert 'ImageDescription' not in tif.pages[1].tags
            assert__str__(tif)


def test_write_description_tag_nometadata():
    """Test no JSON description is written with metadata=None."""
    data = random_data(numpy.uint8, (2, 219, 301))
    description = 'Created by TestTiffWriter\nLorem ipsum dolor...'
    with TempFileName('write_description_tag_nometadata') as fname:
        imwrite(fname, data, description=description, metadata=None)
        assert_valid_tiff(fname)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 2
            assert tif.pages.first.description == description
            assert 'ImageDescription' not in tif.pages[1].tags
            assert (
                tif.pages.first.tags.get('ImageDescription', index=1)
                is None
            )
            assert tif.series[0].kind == 'generic'
            assert__str__(tif)


def test_write_description_tag_notshaped():
    """Test no JSON description is written with shaped=False."""
    data = random_data(numpy.uint8, (2, 219, 301))
    description = 'Created by TestTiffWriter\nLorem ipsum dolor...'
with TempFileName('write_description_tag_notshaped') as fname:
        imwrite(fname, data, description=description, shaped=False)
        assert_valid_tiff(fname)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 2
            assert tif.pages.first.description == description
            assert 'ImageDescription' not in tif.pages[1].tags
            assert (
                tif.pages.first.tags.get('ImageDescription', index=1)
                is None
            )
            assert tif.series[0].kind == 'generic'
            assert__str__(tif)


def test_write_description_ome():
    """Test write multiple imagedescription tags to OME."""
    # https://forum.image.sc/t/79471
    with TempFileName('write_description_ome') as fname:
        with pytest.warns(UserWarning):
            imwrite(
                fname,
                shape=(2, 32, 32),
                dtype=numpy.uint8,
                ome=True,
                description='description',  # not written
                extratags=[('ImageDescription', 2, None, 'extratags', False)],
            )
        with TiffFile(fname) as tif:
            assert tif.is_ome
            page = tif.pages.first
            assert page.description.startswith('<?xml')

 4 else None):
        if i > 4:
            with pytest.raises(TypeError):
                imwrite(fname, data, **kwargs)
            kwargs['compression'] = compressionargs[0]
            if len(compressionargs) > 1:
                kwargs['compressionargs'] = {
                    'level': compressionargs[1],
                    **compressionargs[2],
                }
            imwrite(fname, data, **kwargs)
        else:
            imwrite(fname, data, **kwargs)
        assert_valid_tiff(fname)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 1
            page = tif.pages.first
            assert page.compression == (
                COMPRESSION.ADOBE_DEFLATE if compressed else COMPRESSION.NONE
            )
            assert page.planarconfig == PLANARCONFIG.SEPARATE
            assert page.photometric == PHOTOMETRIC.RGB
            assert page.imagewidth == 301
            assert page.imagelength == 219
            assert page.samplesperpixel == 3
            assert len(page.dataoffsets) == (9 if compressed else 3)
            image = tif.asarray()
            assert_array_equal(data, image)
            assert__str__(tif)


@pytest.mark.parametrize(
    'args',
    [
        (0, 0),
        (1, 5),
        (2, COMPRESSION.ADOBE_DEFLATE),
        (3, COMPRESSION.ADOBE_DEFLATE, 5),
    ],
)
def test_write_compress_args(args):
    """Test that the compress parameter no longer works as of 2022.4.22."""
    i = args[0]
    compressargs = args[1:]
    if len(compressargs) == 1:
        compressargs = compressargs[0]
    data = WRITE_DATA
    with TempFileName(f'write_compression_args_{i}') as fname:
        with pytest.raises(TypeError):
            imwrite(
                fname, data, compress=compressargs, photometric=PHOTOMETRIC.RGB
            )


def test_write_compression_none():
    """Test write compression=0."""
    data = WRITE_DATA
    with TempFileName('write_compression_none') as fname:
        imwrite(fname, data, compression=0, photometric=PHOTOMETRIC.RGB)
        assert_valid_tiff(fname)
        with TiffFile(fname) as tif:
            assert len(tif.pages) == 1
            page = tif.pages.first
            assert page.is_contiguous
            assert page.compression == COMPRESSION.NONE
            assert page.planarconfig == PLANARCONFIG.SEPARATE
            assert page.photometric == PHOTOMETRIC.RGB
            assert page.imagewidth == 301
            assert page.imagelength == 219
            assert page.samplesperpixel == 3
            assert len(page.dataoffsets) == 3
            image = tif.asarray()
            assert_array_equal(data, image)
            assert_aszarr_method(tif, image)
            assert__str__(tif)


# @pytest.mark.parametrize('optimize', [None, False, True])
# @pytest.mark.parametrize('smoothing', [None, 10])
@pytest.mark.skipif(
    SKIP_PUBLIC or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON
)
@pytest.mark.parametrize('subsampling', ['444', '422', '420', '411'])
@pytest.mark.parametrize('dtype', [numpy.uint8, numpy.uint16])
def test_write_compression_jpeg(dtype, subsampling):
    """Test write JPEG compression with subsampling."""
    dtype = numpy.dtype(dtype)
    filename = f'write_compression_jpeg_{dtype}_{subsampling}'
    subsampling, atol = {
        '444': [(1, 1), 5],
        '422': [(2, 1), 10],
        '420':
[(2, 2), 20], '411': [(4, 1), 40], }[subsampling] data = numpy.load(public_file('tifffile/rgb.u1.npy')).astype(dtype) data = data[:32, :16].copy() # make divisible by subsamples with TempFileName(filename) as fname: imwrite( fname, data, compression=COMPRESSION.JPEG, compressionargs={'level': 99}, subsampling=subsampling, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous if subsampling[0] > 1: assert page.is_subsampled assert page.tags['YCbCrSubSampling'].value == subsampling assert page.compression == COMPRESSION.JPEG assert page.photometric == PHOTOMETRIC.YCBCR assert page.imagewidth == data.shape[1] assert page.imagelength == data.shape[0] assert page.samplesperpixel == 3 image = tif.asarray() assert_allclose(data, image, atol=atol) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.parametrize('dtype', [numpy.uint8, bool]) def test_write_compression_deflate(dtype): """Test write ZLIB compression.""" dtype = numpy.dtype(dtype) data = WRITE_DATA.astype(dtype) with TempFileName(f'write_compression_deflate_{dtype}') as fname: imwrite( fname, data, compression=COMPRESSION.DEFLATE, compressionargs={'level': 6}, photometric=PHOTOMETRIC.RGB, rowsperstrip=108, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.DEFLATE assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.rowsperstrip == 108 assert len(page.dataoffsets) == 9 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_compression_deflate_level(): """Test write ZLIB compression with level.""" data = WRITE_DATA with TempFileName('write_compression_deflate_level') as fname: imwrite( fname, data, compression=COMPRESSION.ADOBE_DEFLATE, compressionargs={'level': 9}, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.parametrize('dtype', [numpy.uint8, bool]) def test_write_compression_lzma(dtype): """Test write LZMA compression.""" dtype = numpy.dtype(dtype) data = WRITE_DATA.astype(dtype) with TempFileName(f'write_compression_lzma_{dtype}') as fname: imwrite( fname, data, compression=COMPRESSION.LZMA, photometric=PHOTOMETRIC.RGB, rowsperstrip=108, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.LZMA assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.rowsperstrip == 108 assert len(page.dataoffsets) == 9 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif( SKIP_CODECS or not 
imagecodecs.ZSTD.available, reason=REASON ) @pytest.mark.parametrize('dtype', [numpy.uint8, bool]) def test_write_compression_zstd(dtype): """Test write ZSTD compression.""" dtype = numpy.dtype(dtype) data = WRITE_DATA.astype(dtype) with TempFileName(f'write_compression_zstd_{dtype}') as fname: imwrite( fname, data, compression=COMPRESSION.ZSTD, photometric=PHOTOMETRIC.RGB, rowsperstrip=108, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.ZSTD assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.rowsperstrip == 108 assert len(page.dataoffsets) == 9 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif( SKIP_CODECS or not imagecodecs.WEBP.available, reason=REASON ) def test_write_compression_webp(): """Test write WEBP compression.""" data = WRITE_DATA.astype(numpy.uint8).reshape((219, 301, 3)) with TempFileName('write_compression_webp') as fname: imwrite( fname, data, compression=COMPRESSION.WEBP, compressionargs={'level': -1}, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.WEBP assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif( SKIP_CODECS or not imagecodecs.JPEG2K.available, reason=REASON ) def test_write_compression_jpeg2k(): """Test write JPEG 2000 compression.""" data = WRITE_DATA.astype(numpy.uint8).reshape((219, 301, 3)) with TempFileName('write_compression_jpeg2k') as fname: imwrite( fname, data, compression=COMPRESSION.JPEG2000, compressionargs={'level': -1}, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.JPEG2000 assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif( SKIP_CODECS or not imagecodecs.JPEGXL.available, reason=REASON ) def test_write_compression_jpegxl(): """Test write JPEG XL compression.""" data = WRITE_DATA.astype(numpy.uint8).reshape((219, 301, 3)) with TempFileName('write_compression_jpegxl') as fname: imwrite( fname, data, compression=COMPRESSION.JPEGXL, compressionargs={'level': 101}, # lossless photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.JPEGXL assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_CODECS, reason=REASON) @pytest.mark.parametrize('compression', [None, 'deflate', 'zstd']) def test_write_compression_lerc(compression): 
"""Test write LERC compression.""" if not hasattr(imagecodecs, 'LERC'): pytest.skip('LERC codec missing') compressionargs = { None: None, 'deflate': {'level': 7}, 'zstd': {'level': 10}, }[compression] lercparameters = { None: (4, 0), 'deflate': (4, 1), 'zstd': (4, 2), }[compression] data = WRITE_DATA.astype(numpy.uint16).reshape((219, 301, 3)) with TempFileName(f'write_compression_lerc_{compression}') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, compression=COMPRESSION.LERC, compressionargs={ 'compression': compression, 'compressionargs': compressionargs, }, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.LERC assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.tags['LercParameters'].value == lercparameters image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS, reason=REASON) def test_write_compression_jetraw(): """Test write Jetraw compression.""" try: have_jetraw = imagecodecs.JETRAW.available except AttributeError: # requires imagecodecs > 2022.22.2 have_jetraw = False if not have_jetraw: pytest.skip('Jetraw codec not available') data = imread(private_file('jetraw/16ms-1.tif')) assert data.dtype == numpy.uint16 assert data.shape == (2304, 2304) assert data[1490, 1830] == 36701 # Jetraw requires initialization imagecodecs.jetraw_init() with TempFileName('write_compression_jetraw') as fname: try: imwrite( fname, data, compression=COMPRESSION.JETRAW, compressionargs={'identifier': '500202_standard_bin1x'}, ) except imagecodecs.JetrawError as exc: if 'license' in str(exc): pytest.skip('Jetraw_encode requires a license') else: raise exc with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == COMPRESSION.JETRAW assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.planarconfig == PLANARCONFIG.CONTIG assert page.imagewidth == 2304 assert page.imagelength == 2304 assert page.rowsperstrip == 2304 assert page.bitspersample == 16 assert page.samplesperpixel == 1 image = tif.asarray() assert 0.5 > numpy.mean( image.astype(numpy.float32) - data.astype(numpy.float32) ) assert__str__(tif) @pytest.mark.skipif(SKIP_CODECS, reason=REASON) @pytest.mark.parametrize('dtype', [numpy.int8, numpy.uint8, bool]) @pytest.mark.parametrize('tile', [None, (16, 16)]) def test_write_compression_packbits(dtype, tile): """Test write PackBits compression.""" dtype = numpy.dtype(dtype) uncompressed = numpy.frombuffer( b'\xaa\xaa\xaa\x80\x00\x2a\xaa\xaa\xaa\xaa\x80\x00' b'\x2a\x22\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa', dtype=dtype, ) shape = 2, 7, uncompressed.size data = numpy.empty(shape, dtype=dtype) data[..., :] = uncompressed with TempFileName(f'write_compression_packits_{dtype}') as fname: imwrite(fname, data, compression=COMPRESSION.PACKBITS, tile=tile) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 2 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.PACKBITS assert page.planarconfig == PLANARCONFIG.CONTIG assert page.imagewidth == uncompressed.size assert page.imagelength == 7 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def 
test_write_compression_rowsperstrip(): """Test write rowsperstrip with compression.""" data = WRITE_DATA with TempFileName('write_compression_rowsperstrip') as fname: imwrite( fname, data, compression=COMPRESSION.ADOBE_DEFLATE, rowsperstrip=32, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.rowsperstrip == 32 assert len(page.dataoffsets) == 21 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_compression_tiled(): """Test write compressed tiles.""" data = WRITE_DATA with TempFileName('write_compression_tiled') as fname: imwrite( fname, data, compression=COMPRESSION.ADOBE_DEFLATE, tile=(32, 32), photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.is_tiled assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert len(page.dataoffsets) == 210 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_compression_predictor(): """Test write horizontal differencing.""" data = WRITE_DATA with TempFileName('write_compression_predictor') as fname: imwrite( fname, data, compression=COMPRESSION.ADOBE_DEFLATE, predictor=PREDICTOR.HORIZONTAL, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.predictor == PREDICTOR.HORIZONTAL assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_CODECS, reason=REASON) @pytest.mark.parametrize('dtype', [numpy.uint16, numpy.float32]) def test_write_compression_predictor_tiled(dtype): """Test write horizontal differencing with tiles.""" dtype = numpy.dtype(dtype) data = WRITE_DATA.astype(dtype) with TempFileName(f'write_compression_tiled_predictor_{dtype}') as fname: imwrite( fname, data, compression=COMPRESSION.ADOBE_DEFLATE, predictor=True, tile=(32, 32), photometric=PHOTOMETRIC.RGB, ) if dtype.kind != 'f': assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.is_tiled assert page.compression == COMPRESSION.ADOBE_DEFLATE assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.predictor == 3 if dtype.kind == 'f' else 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_rowsperstrip(): """Test write rowsperstrip without compression.""" 
data = WRITE_DATA with TempFileName('write_rowsperstrip') as fname: imwrite( fname, data, rowsperstrip=32, contiguous=False, photometric=PHOTOMETRIC.RGB, metadata=None, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.rowsperstrip == 32 assert len(page.dataoffsets) == 21 stripbytecounts = page.tags['StripByteCounts'].value assert stripbytecounts[0] == 19264 assert stripbytecounts[6] == 16254 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(IS_BE, reason=REASON) def test_write_bigendian(): """Test write big endian file.""" # also test memory mapping non-native byte order data = random_data(numpy.float32, (2, 3, 219, 301)) data = data.view(data.dtype.newbyteorder()) data = numpy.nan_to_num(data, copy=False) with TempFileName('write_bigendian') as fname: imwrite( fname, data, planarconfig=PLANARCONFIG.SEPARATE, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 2 assert len(tif.series) == 1 assert tif.byteorder == '>' # assert not tif.isnative assert tif.series[0].dataoffset is not None page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 # test read data image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) image = page.asarray() assert_array_equal(data[0], image) # test direct memory mapping; returns big endian array image = tif.asarray(out='memmap') assert isinstance(image, numpy.memmap) assert image.dtype == numpy.dtype('>f4') assert_array_equal(data, image) del image image = page.asarray(out='memmap') assert isinstance(image, numpy.memmap) assert image.dtype == numpy.dtype('>f4') assert_array_equal(data[0], image) del image # test indirect memory mapping; returns native endian array image = tif.asarray(out='memmap:') assert isinstance(image, numpy.memmap) assert image.dtype == numpy.dtype('=f4') assert_array_equal(data, image) del image image = page.asarray(out='memmap:') assert isinstance(image, numpy.memmap) assert image.dtype == numpy.dtype('=f4') assert_array_equal(data[0], image) del image # test 2nd page page = tif.pages[1] image = page.asarray(out='memmap') assert isinstance(image, numpy.memmap) assert image.dtype == numpy.dtype('>f4') assert_array_equal(data[1], image) del image image = page.asarray(out='memmap:') assert isinstance(image, numpy.memmap) assert image.dtype == numpy.dtype('=f4') assert_array_equal(data[1], image) del image assert__str__(tif) def test_write_zero_size(): """Test write zero size array no longer fails.""" # with pytest.raises(ValueError): with pytest.warns(UserWarning): with TempFileName('write_empty') as fname: imwrite(fname, numpy.empty(0)) def test_write_pixel(): """Test write single pixel.""" data = numpy.zeros(1, dtype=numpy.uint8) with TempFileName('write_pixel') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 assert tif.series[0].axes == 'Y' page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric 
!= PHOTOMETRIC.RGB assert page.imagewidth == 1 assert page.imagelength == 1 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert_aszarr_method(tif, image, chunkmode='page') assert__str__(tif) def test_write_small(): """Test write small image.""" data = random_data(numpy.uint8, (1, 1)) with TempFileName('write_small') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 1 assert page.imagelength == 1 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_2d_as_rgb(): """Test write RGB color palette as RGB image.""" # image length should be 1 data = numpy.arange(3 * 256, dtype=numpy.uint16).reshape(256, 3) // 3 with TempFileName('write_2d_as_rgb_contig') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 assert tif.series[0].axes == 'XS' page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 256 assert page.imagelength == 1 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert_aszarr_method(tif, image, chunkmode='page') assert__str__(tif) def test_write_auto_photometric_planar(): """Test detect photometric in planar mode.""" data = random_data(numpy.uint8, (4, 31, 33)) with TempFileName('write_auto_photometric_planar1') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (4, 31, 33) assert page.axes == 'SYX' assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 with TempFileName('write_auto_photometric_planar2') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, data, planarconfig='separate') assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (4, 31, 33) assert page.axes == 'SYX' assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 data = random_data(numpy.uint8, (4, 7, 31, 33)) with TempFileName('write_auto_photometric_volumetric_planar1') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, data, volumetric=True) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (4, 7, 31, 33) assert page.axes == 'SZYX' assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 assert page.is_volumetric with TempFileName('write_auto_photometric_volumetric_planar2') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, data, planarconfig='separate', volumetric=True) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (4, 7, 31, 33) assert page.axes == 'SZYX' assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == 
PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 assert page.is_volumetric def test_write_auto_photometric_contig(): """Test detect photometric in contig mode.""" data = random_data(numpy.uint8, (31, 33, 4)) with TempFileName('write_auto_photometric_contig1') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (31, 33, 4) assert page.axes == 'YXS' assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 with TempFileName('write_auto_photometric_contig2') as fname: imwrite(fname, data, planarconfig='contig') assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (31, 33, 4) assert page.axes == 'YXS' assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 data = random_data(numpy.uint8, (7, 31, 33, 4)) with TempFileName('write_auto_photometric_volumetric_contig1') as fname: imwrite(fname, data, volumetric=True) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (7, 31, 33, 4) assert page.axes == 'ZYXS' assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 assert page.is_volumetric with TempFileName('write_auto_photometric_volumetric_contig2') as fname: imwrite(fname, data, planarconfig='contig', volumetric=True) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (7, 31, 33, 4) assert page.axes == 'ZYXS' assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert len(page.extrasamples) == 1 assert page.is_volumetric def test_write_invalid_contig_rgb(): """Test write planar RGB with 2 samplesperpixel.""" data = random_data(numpy.uint8, (219, 301, 2)) with pytest.raises(ValueError): with TempFileName('write_invalid_contig_rgb') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.RGB) # default to pages with TempFileName('write_invalid_contig_rgb_pages') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 219 assert tif.series[0].axes == 'QYX' page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 2 assert page.imagelength == 301 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) # better save as contig samples with TempFileName('write_invalid_contig_rgb_samples') as fname: imwrite(fname, data, planarconfig=PLANARCONFIG.CONTIG) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 assert tif.series[0].axes == 'YXS' page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_invalid_planar_rgb(): """Test write planar RGB with 2 samplesperpixel.""" data = random_data(numpy.uint8, (2, 219, 301)) with pytest.raises(ValueError): with 
TempFileName('write_invalid_planar_rgb') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, ) # default to pages with TempFileName('write_invalid_planar_rgb_pages') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 2 assert tif.series[0].axes == 'QYX' page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) # or save as planar samples with TempFileName('write_invalid_planar_rgb_samples') as fname: imwrite(fname, data, planarconfig=PLANARCONFIG.SEPARATE) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 assert tif.series[0].axes == 'SYX' page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_gray(): """Test write grayscale with extrasamples contig.""" data = random_data(numpy.uint8, (301, 219, 2)) with TempFileName('write_extrasamples_gray') as fname: imwrite(fname, data, extrasamples=[EXTRASAMPLE.UNASSALPHA]) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.planarconfig == PLANARCONFIG.CONTIG assert page.imagewidth == 219 assert page.imagelength == 301 assert page.samplesperpixel == 2 assert page.extrasamples[0] == 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_gray_planar(): """Test write planar grayscale with extrasamples.""" data = random_data(numpy.uint8, (2, 301, 219)) with TempFileName('write_extrasamples_gray_planar') as fname: imwrite( fname, data, planarconfig=PLANARCONFIG.SEPARATE, extrasamples=[EXTRASAMPLE.UNASSALPHA], ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.imagewidth == 219 assert page.imagelength == 301 assert page.samplesperpixel == 2 assert page.extrasamples[0] == 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_gray_mix(): """Test write grayscale with multiple extrasamples.""" data = random_data(numpy.uint8, (301, 219, 4)) with TempFileName('write_extrasamples_gray_mix') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.MINISBLACK, extrasamples=[ EXTRASAMPLE.ASSOCALPHA, EXTRASAMPLE.UNASSALPHA, EXTRASAMPLE.UNSPECIFIED, ], ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.MINISBLACK assert page.imagewidth == 219 assert page.imagelength == 301 assert page.samplesperpixel == 4 assert page.extrasamples == (1, 2, 0) image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) 
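# A minimal sketch (not part of the original suite) of the extrasamples
# mechanism verified by the surrounding tests: samples beyond those implied
# by the photometric interpretation are typed explicitly via extrasamples,
# and EXTRASAMPLE.UNASSALPHA is stored as value 2. The enum import path
# mirrors this suite; file name and shapes are illustrative only.
def _sketch_extrasamples(fname):
    import numpy
    from tifffile import EXTRASAMPLE, PHOTOMETRIC, TiffFile, imwrite

    rgba = numpy.zeros((16, 16, 4), numpy.uint8)
    imwrite(
        fname,
        rgba,
        photometric=PHOTOMETRIC.RGB,
        extrasamples=[EXTRASAMPLE.UNASSALPHA],
    )
    with TiffFile(fname) as tif:
        assert tif.pages.first.extrasamples == (2,)  # one unassociated alpha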
def test_write_extrasamples_unspecified(): """Test write RGB with unspecified extrasamples by default.""" data = random_data(numpy.uint8, (301, 219, 5)) with TempFileName('write_extrasamples_unspecified') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 219 assert page.imagelength == 301 assert page.samplesperpixel == 5 assert page.extrasamples == (0, 0) image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_assocalpha(): """Test write RGB with assocalpha extrasample.""" data = random_data(numpy.uint8, (219, 301, 4)) with TempFileName('write_extrasamples_assocalpha') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, extrasamples=EXTRASAMPLE.ASSOCALPHA, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 4 assert page.extrasamples[0] == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_mix(): """Test write RGB with mixture of extrasamples.""" data = random_data(numpy.uint8, (219, 301, 6)) with TempFileName('write_extrasamples_mix') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, extrasamples=[ EXTRASAMPLE.ASSOCALPHA, EXTRASAMPLE.UNASSALPHA, EXTRASAMPLE.UNSPECIFIED, ], ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 6 assert page.extrasamples == (1, 2, 0) image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_contig(): """Test write contig grayscale with large number of extrasamples.""" data = random_data(numpy.uint8, (3, 219, 301)) with TempFileName('write_extrasamples_contig') as fname: imwrite(fname, data, planarconfig=PLANARCONFIG.CONTIG) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 219 assert page.imagelength == 3 assert page.samplesperpixel == 301 assert len(page.extrasamples) == 301 - 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) # better save as RGB planar with TempFileName('write_extrasamples_contig_planar') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def 
test_write_extrasamples_contig_rgb2(): """Test write contig RGB with large number of extrasamples.""" data = random_data(numpy.uint8, (3, 219, 301)) with TempFileName('write_extrasamples_contig_rgb2') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.CONTIG, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 219 assert page.imagelength == 3 assert page.samplesperpixel == 301 assert len(page.extrasamples) == 301 - 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) # better save as planar with TempFileName('write_extrasamples_contig_rgb2_planar') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_planar(): """Test write planar large number of extrasamples.""" data = random_data(numpy.uint8, (219, 301, 3)) with TempFileName('write_extrasamples_planar') as fname: imwrite(fname, data, planarconfig=PLANARCONFIG.SEPARATE) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 3 assert page.imagelength == 301 assert page.samplesperpixel == 219 assert len(page.extrasamples) == 219 - 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_planar_rgb2(): """Test write planar RGB with large number of extrasamples.""" data = random_data(numpy.uint8, (219, 301, 3)) with TempFileName('write_extrasamples_planar_rgb2') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 3 assert page.imagelength == 301 assert page.samplesperpixel == 219 assert len(page.extrasamples) == 219 - 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_minisblack_planar(): """Test write planar minisblack.""" data = random_data(numpy.uint8, (3, 219, 301)) with TempFileName('write_minisblack_planar') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.MINISBLACK) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 3 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_minisblack_contig(): """Test write 
contig minisblack.""" data = random_data(numpy.uint8, (219, 301, 3)) with TempFileName('write_minisblack_contig') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.MINISBLACK) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 219 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 3 assert page.imagelength == 301 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_scalar(): """Test write 2D grayscale.""" data = random_data(numpy.uint8, (219, 301)) with TempFileName('write_scalar') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_scalar_3d(): """Test write 3D grayscale.""" data = random_data(numpy.uint8, (63, 219, 301)) with TempFileName('write_scalar_3d') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 63 page = tif.pages[62] assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 image = tif.asarray() assert isinstance(image, numpy.ndarray) assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_scalar_4d(): """Test write 4D grayscale.""" data = random_data(numpy.uint8, (3, 2, 219, 301)) with TempFileName('write_scalar_4d') as fname: imwrite(fname, data) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 6 page = tif.pages[5] assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_contig_extrasample(): """Test write grayscale with contig extrasamples.""" data = random_data(numpy.uint8, (219, 301, 2)) with TempFileName('write_contig_extrasample') as fname: imwrite(fname, data, planarconfig=PLANARCONFIG.CONTIG) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_planar_extrasample(): """Test write grayscale with planar extrasamples.""" data = random_data(numpy.uint8, (2, 219, 301)) with TempFileName('write_planar_extrasample') as fname: imwrite(fname, data, planarconfig=PLANARCONFIG.SEPARATE) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric != 
PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 2 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_auto_rgb_contig(): """Test write auto contig RGB.""" data = random_data(numpy.uint8, (219, 301, 3)) with TempFileName('write_auto_rgb_contig') as fname: imwrite(fname, data) # photometric=RGB assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_auto_rgb_planar(): """Test write auto planar RGB.""" data = random_data(numpy.uint8, (3, 219, 301)) with TempFileName('write_auto_rgb_planar') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, data) # photometric=RGB, planarconfig=SEPARATE assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_auto_rgba_contig(): """Test write auto contig RGBA.""" data = random_data(numpy.uint8, (219, 301, 4)) with TempFileName('write_auto_rgba_contig') as fname: imwrite(fname, data) # photometric=RGB assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 4 assert page.extrasamples[0] == EXTRASAMPLE.UNASSALPHA image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_auto_rgba_planar(): """Test write auto planar RGBA.""" data = random_data(numpy.uint8, (4, 219, 301)) with TempFileName('write_auto_rgba_planar') as fname: with pytest.warns(DeprecationWarning): imwrite(fname, data) # photometric=RGB, planarconfig=SEPARATE assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 4 assert page.extrasamples[0] == EXTRASAMPLE.UNASSALPHA image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_contig_rgb(): """Test write contig RGB with extrasamples.""" data = random_data(numpy.uint8, (219, 301, 8)) with TempFileName('write_extrasamples_contig') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 8 assert 
len(page.extrasamples) == 5 assert page.extrasamples[0] == EXTRASAMPLE.UNSPECIFIED image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_extrasamples_planar_rgb(): """Test write planar RGB with extrasamples.""" data = random_data(numpy.uint8, (8, 219, 301)) with TempFileName('write_extrasamples_planar') as fname: imwrite(fname, data, photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 8 assert len(page.extrasamples) == 5 assert page.extrasamples[0] == EXTRASAMPLE.UNSPECIFIED image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_CODECS, reason=REASON) def test_write_iccprofile(): """Test write RGB with ICC profile.""" data = random_data(numpy.uint8, (219, 301, 3)) iccprofile = imagecodecs.cms_profile('srgb') with TempFileName('write_iccprofile') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, iccprofile=iccprofile ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 assert page.tags[34675].dtype == DATATYPE.UNDEFINED assert page.iccprofile == iccprofile imagecodecs.cms_profile_validate(page.iccprofile) tif.asarray() assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_write_cfa(): """Test write uncompressed CFA image.""" # TODO: write a valid TIFF/EP file data = imread( private_file('DNG/cinemadng/M14-1451_000085_cDNG_uncompressed.dng') ) extratags = [ (271, 's', 4, 'Make', False), (272, 's', 5, 'Model', False), (33421, 'H', 2, (2, 2), False), # CFARepeatPatternDim (33422, 'B', 4, b'\x00\x01\x01\x02', False), # CFAPattern # (37398, 'B', 4, b'\x01\x00\x00\x00', False), # TIFF/EPStandardID # (37399, 'H', 1, 0) # SensingMethod Undefined # (50706, 'B', 4, b'\x01\x04\x00\x00', False), # DNGVersion ] with TempFileName('write_cfa') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.CFA, software='Tifffile', datetime=True, extratags=extratags, ) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.compression == 1 assert page.photometric == PHOTOMETRIC.CFA assert page.imagewidth == 960 assert page.imagelength == 540 assert page.bitspersample == 16 assert page.tags['CFARepeatPatternDim'].value == (2, 2) assert page.tags['CFAPattern'].value == b'\x00\x01\x01\x02' assert_array_equal(page.asarray(), data) assert_aszarr_method(page, data) def test_write_tiled_compressed(): """Test write compressed tiles.""" data = random_data(numpy.uint8, (3, 219, 301)) with TempFileName('write_tiled_compressed') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, compression=COMPRESSION.ADOBE_DEFLATE, compressionargs={'level': -1}, tile=(96, 64), ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_tiled assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB 
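# with tile=(96, 64) on a 219x301 image, each of the three separate sample planes should be stored as ceil(219/96) * ceil(301/64) = 3 * 5 = 15 tiles, i.e. 45 compressed tile segments in total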
assert page.imagewidth == 301 assert page.imagelength == 219 assert page.tilewidth == 64 assert page.tilelength == 96 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_tiled(): """Test write tiled.""" data = random_data(numpy.uint16, (219, 301)) with TempFileName('write_tiled') as fname: imwrite(fname, data, tile=(96, 64)) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_tiled assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.tilewidth == 64 assert page.tilelength == 96 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_tiled_planar(): """Test write planar tiles.""" data = random_data(numpy.uint8, (4, 219, 301)) with TempFileName('write_tiled_planar') as fname: imwrite( fname, data, tile=(96, 64), photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_tiled assert not page.is_contiguous assert not page.is_volumetric assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.tilewidth == 64 assert page.tilelength == 96 assert page.samplesperpixel == 4 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_tiled_contig(): """Test write contig tiles.""" data = random_data(numpy.uint8, (219, 301, 3)) with TempFileName('write_tiled_contig') as fname: imwrite(fname, data, tile=(96, 64), photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_tiled assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.tilewidth == 64 assert page.tilelength == 96 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_tiled_pages(): """Test write multiple tiled pages.""" data = random_data(numpy.uint8, (5, 219, 301, 3)) with TempFileName('write_tiled_pages') as fname: imwrite(fname, data, tile=(96, 64), photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 5 page = tif.pages.first assert page.is_tiled assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert not page.is_volumetric assert page.imagewidth == 301 assert page.imagelength == 219 assert page.tilewidth == 64 assert page.tilelength == 96 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.parametrize('compression', [1, 8]) def test_write_iter_tiles(compression): """Test write tiles from iterator.""" data = random_data(numpy.uint16, (12, 16, 16)) def tiles(): for i in range(data.shape[0]): yield data[i] with TempFileName(f'write_iter_tiles_{compression}') as fname: with pytest.raises((StopIteration, 
RuntimeError)): # missing tiles imwrite( fname, tiles(), shape=(43, 81), tile=(16, 16), dtype=numpy.uint16, compression=compression, ) with pytest.raises(ValueError): # missing parameters imwrite(fname, tiles(), compression=compression) with pytest.raises(ValueError): # missing parameters imwrite(fname, tiles(), shape=(43, 81), compression=compression) with pytest.raises(ValueError): # dtype mismatch imwrite( fname, tiles(), shape=(43, 61), tile=(16, 16), dtype=numpy.uint32, compression=compression, ) with pytest.raises(ValueError): # shape mismatch imwrite( fname, tiles(), shape=(43, 61), tile=(8, 8), dtype=numpy.uint16, compression=compression, ) imwrite( fname, tiles(), shape=(43, 61), tile=(16, 16), dtype=numpy.uint16, compression=compression, ) with TiffFile(fname) as tif: page = tif.pages.first assert page.shape == (43, 61) assert page.tilelength == 16 assert page.tilewidth == 16 assert page.compression == compression image = page.asarray() assert_array_equal(image[:16, :16], data[0]) for i, segment in enumerate(page.segments()): assert_array_equal(numpy.squeeze(segment[0]), data[i]) @pytest.mark.parametrize('compression', [1, 8]) def test_write_iter_tiles_separate(compression): """Test write separate tiles from iterator.""" data = random_data(numpy.uint16, (24, 16, 16)) def tiles(): for i in range(data.shape[0]): yield data[i] with TempFileName(f'write_iter_tiles_separate_{compression}') as fname: imwrite( fname, tiles(), shape=(2, 43, 61), tile=(16, 16), dtype=numpy.uint16, planarconfig=PLANARCONFIG.SEPARATE, compression=compression, ) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.shape == (2, 43, 61) assert page.tilelength == 16 assert page.tilewidth == 16 assert page.planarconfig == 2 image = page.asarray() assert_array_equal(image[0, :16, :16], data[0]) for i, segment in enumerate(page.segments()): assert_array_equal(numpy.squeeze(segment[0]), data[i]) @pytest.mark.parametrize('compression', [1, 8]) def test_write_iter_tiles_none(compression): """Test write tiles from iterator with missing tiles. Missing tiles are not stored; they are written with tileoffset=0 and tilebytecount=0. 
""" data = random_data(numpy.uint16, (12, 16, 16)) def tiles(): for i in range(data.shape[0]): if i % 3 == 1: data[i] = 0 yield None else: yield data[i] with TempFileName(f'write_iter_tiles_none_{compression}') as fname: imwrite( fname, tiles(), shape=(43, 61), tile=(16, 16), dtype=numpy.uint16, compression=compression, ) with TiffFile(fname) as tif: page = tif.pages.first assert page.shape == (43, 61) assert page.tilelength == 16 assert page.tilewidth == 16 assert page.databytecounts[1] == 0 assert page.dataoffsets[1] == 0 image = page.asarray() assert_array_equal(image[:16, :16], data[0]) for i, segment in enumerate(page.segments()): if i % 3 == 1: assert segment[0] is None else: assert_array_equal(numpy.squeeze(segment[0]), data[i]) @pytest.mark.parametrize('compression', [1, 8]) def test_write_iter_tiles_bytes(compression): """Test write tiles from iterator of bytes.""" data = random_data(numpy.uint16, (5, 3, 15, 17)) with TempFileName(f'write_iter_tiles_bytes_{compression}') as fname: imwrite( fname + 'f', data, tile=(16, 16), compression=compression, planarconfig='separate', photometric='rgb', ) def tiles(): with TiffFile(fname + 'f') as tif: fh = tif.filehandle for page in tif.pages: for offset, bytecount in zip( page.dataoffsets, page.databytecounts ): fh.seek(offset) strip = fh.read(bytecount) yield strip imwrite( fname, tiles(), shape=data.shape, dtype=data.dtype, tile=(16, 16), compression=compression, planarconfig='separate', photometric='rgb', ) assert_array_equal(imread(fname), data) @pytest.mark.parametrize('compression', [1, 8]) @pytest.mark.parametrize('rowsperstrip', [5, 16]) def test_write_iter_strips_bytes(compression, rowsperstrip): """Test write strips from iterator of bytes.""" data = random_data(numpy.uint16, (5, 3, 16, 16)) with TempFileName( f'write_iter_strips_bytes_{compression}{rowsperstrip}' ) as fname: imwrite( fname + 'f', data, rowsperstrip=rowsperstrip, compression=compression, planarconfig='separate', photometric='rgb', ) def strips(): with TiffFile(fname + 'f') as tif: fh = tif.filehandle for page in tif.pages: for offset, bytecount in zip( page.dataoffsets, page.databytecounts ): fh.seek(offset) strip = fh.read(bytecount) yield strip imwrite( fname, strips(), shape=data.shape, dtype=data.dtype, rowsperstrip=rowsperstrip, compression=compression, planarconfig='separate', photometric='rgb', ) assert_array_equal(imread(fname), data) @pytest.mark.parametrize('compression', [1, 8]) @pytest.mark.parametrize('rowsperstrip', [5, 16]) def test_write_iter_pages_none(compression, rowsperstrip): """Test write pages from iterator with missing pages. Missing pages are written as zeros. 
""" data = random_data(numpy.uint16, (12, 16, 16)) def pages(): for i in range(data.shape[0]): if i % 3 == 1: data[i] = 0 yield None else: yield data[i] with TempFileName( f'write_iter_pages_none_{compression}{rowsperstrip}' ) as fname: imwrite( fname, pages(), shape=(12, 16, 16), dtype=numpy.uint16, rowsperstrip=rowsperstrip, compression=compression, ) with TiffFile(fname) as tif: for i, page in enumerate(tif.pages): assert page.shape == (16, 16) assert page.rowsperstrip == rowsperstrip assert_array_equal(page.asarray(), data[i]) for j, segment in enumerate(page.segments()): assert_array_equal( numpy.squeeze(segment[0]), numpy.squeeze( data[i, j * rowsperstrip : (j + 1) * rowsperstrip] ), ) def test_write_pyramids(): """Test write two pyramids to shaped file.""" data = random_data(numpy.uint8, (31, 64, 96, 3)) with TempFileName('write_pyramids') as fname: with TiffWriter(fname) as tif: # use pages tif.write(data, tile=(16, 16), photometric=PHOTOMETRIC.RGB) # interrupt pyramid, for example thumbnail tif.write(data[0, :, :, 0]) # pyramid levels tif.write( data[:, ::2, ::2], tile=(16, 16), subfiletype=FILETYPE.REDUCEDIMAGE, photometric=PHOTOMETRIC.RGB, ) tif.write( data[:, ::4, ::4], tile=(16, 16), subfiletype=FILETYPE.REDUCEDIMAGE, photometric=PHOTOMETRIC.RGB, ) # second pyramid using volumetric with downsampling factor 3 tif.write(data, tile=(16, 16, 16), photometric=PHOTOMETRIC.RGB) tif.write( data[::3, ::3, ::3], tile=(16, 16, 16), subfiletype=FILETYPE.REDUCEDIMAGE, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 3 * 31 + 2 + 1 assert len(tif.series) == 3 series = tif.series[0] assert series.kind == 'shaped' assert series.is_pyramidal assert len(series.levels) == 3 assert len(series.levels[0].pages) == 31 assert len(series.levels[1].pages) == 31 assert len(series.levels[2].pages) == 31 assert series.levels[0].shape == (31, 64, 96, 3) assert series.levels[1].shape == (31, 32, 48, 3) assert series.levels[2].shape == (31, 16, 24, 3) series = tif.series[1] assert series.kind == 'shaped' assert not series.is_pyramidal assert series.shape == (64, 96) series = tif.series[2] assert series.kind == 'shaped' assert series.is_pyramidal assert len(series.levels) == 2 assert len(series.levels[0].pages) == 1 assert len(series.levels[1].pages) == 1 assert series.levels[0].keyframe.is_volumetric assert series.levels[1].keyframe.is_volumetric assert series.levels[0].shape == (31, 64, 96, 3) assert series.levels[1].shape == (11, 22, 32, 3) assert_array_equal(tif.asarray(), data) assert_array_equal(tif.asarray(series=0, level=0), data) assert_aszarr_method(tif, data, series=0, level=0) assert_array_equal( data[:, ::2, ::2], tif.asarray(series=0, level=1) ) assert_aszarr_method(tif, data[:, ::2, ::2], series=0, level=1) assert_array_equal( data[:, ::4, ::4], tif.asarray(series=0, level=2) ) assert_aszarr_method(tif, data[:, ::4, ::4], series=0, level=2) assert_array_equal(data[0, :, :, 0], tif.asarray(series=1)) assert_aszarr_method(tif, data[0, :, :, 0], series=1) assert_array_equal(data, tif.asarray(series=2, level=0)) assert_aszarr_method(tif, data, series=2, level=0) assert_array_equal( data[::3, ::3, ::3], tif.asarray(series=2, level=1) ) assert_aszarr_method(tif, data[::3, ::3, ::3], series=2, level=1) assert__str__(tif) def test_write_volumetric_tiled(): """Test write tiled volume.""" data = random_data(numpy.uint8, (253, 64, 96)) with TempFileName('write_volumetric_tiled') as fname: imwrite(fname, data, tile=(64, 64, 64)) 
assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_volumetric assert page.is_tiled assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 96 assert page.imagelength == 64 assert page.imagedepth == 253 assert page.tilewidth == 64 assert page.tilelength == 64 assert page.tiledepth == 64 assert page.tile == (64, 64, 64) assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_CODECS, reason=REASON) def test_write_volumetric_tiled_png(): """Test write tiled volume using an image compressor.""" data = random_data(numpy.uint8, (16, 64, 96, 3)) with TempFileName('write_volumetric_tiled_png') as fname: imwrite( fname, data, tile=(1, 64, 64), photometric=PHOTOMETRIC.RGB, compression=COMPRESSION.PNG, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_volumetric assert page.is_tiled assert page.compression == COMPRESSION.PNG assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 96 assert page.imagelength == 64 assert page.imagedepth == 16 assert page.tilewidth == 64 assert page.tilelength == 64 assert page.tiledepth == 1 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_volumetric_tiled_planar_rgb(): """Test write 5D array as grayscale volumes.""" shape = (2, 3, 256, 64, 96) data = numpy.empty(shape, dtype=numpy.uint8) data[:] = numpy.arange(256, dtype=numpy.uint8).reshape(1, 1, -1, 1, 1) with TempFileName('write_volumetric_tiled_planar_rgb') as fname: imwrite( fname, data, tile=(256, 64, 96), photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 2 page = tif.pages.first assert page.is_volumetric assert page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 96 assert page.imagelength == 64 assert page.imagedepth == 256 assert page.tilewidth == 96 assert page.tilelength == 64 assert page.tiledepth == 256 assert page.samplesperpixel == 3 series = tif.series[0] assert series.kind == 'shaped' assert len(series._pages) == 1 assert len(series.pages) == 2 assert series.dataoffset is not None assert series.shape == shape image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_volumetric_tiled_contig_rgb(): """Test write 6D array as contig RGB volumes.""" shape = (2, 3, 256, 64, 96, 3) data = numpy.empty(shape, dtype=numpy.uint8) data[:] = numpy.arange(256, dtype=numpy.uint8).reshape(1, 1, -1, 1, 1, 1) with TempFileName('write_volumetric_tiled_contig_rgb') as fname: imwrite(fname, data, tile=(256, 64, 96), photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 6 page = tif.pages.first assert page.is_volumetric assert page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 96 assert page.imagelength == 64 assert page.imagedepth == 256 assert page.tilewidth == 96 assert 
page.tilelength == 64 assert page.tiledepth == 256 assert page.samplesperpixel == 3 # self.assertEqual(page.tags['TileOffsets'].value, (352,)) assert page.tags['TileByteCounts'].value == (4718592,) series = tif.series[0] assert series.kind == 'shaped' assert len(series._pages) == 1 assert len(series.pages) == 6 assert series.dataoffset is not None assert series.shape == shape image = tif.asarray() assert_array_equal(data, image) # assert iterating over series.pages data = data.reshape(6, 256, 64, 96, 3) for i, page in enumerate(series.pages): image = page.asarray() assert_array_equal(data[i], image) assert_aszarr_method(page, image) assert__str__(tif) @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_volumetric_tiled_contig_rgb_empty(): """Test write empty 6D array as contig RGB volumes.""" shape = (2, 3, 256, 64, 96, 3) with TempFileName('write_volumetric_tiled_contig_rgb_empty') as fname: with TiffWriter(fname) as tif: tif.write( shape=shape, dtype=numpy.uint8, tile=(256, 64, 96), photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 6 page = tif.pages.first assert page.is_volumetric assert page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 96 assert page.imagelength == 64 assert page.imagedepth == 256 assert page.tilewidth == 96 assert page.tilelength == 64 assert page.tiledepth == 256 assert page.samplesperpixel == 3 # self.assertEqual(page.tags['TileOffsets'].value, (352,)) assert page.tags['TileByteCounts'].value == (4718592,) series = tif.series[0] assert series.kind == 'shaped' assert len(series._pages) == 1 assert len(series.pages) == 6 assert series.dataoffset is not None assert series.shape == shape image = tif.asarray() assert_array_equal(image.shape, shape) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_volumetric_striped(): """Test write striped volume.""" data = random_data(numpy.uint8, (15, 63, 95)) with TempFileName('write_volumetric_striped') as fname: imwrite(fname, data, volumetric=True) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_volumetric assert not page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 95 assert page.imagelength == 63 assert page.imagedepth == 15 assert len(page.dataoffsets) == 15 assert len(page.databytecounts) == 15 assert page.samplesperpixel == 1 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_CODECS, reason=REASON) def test_write_volumetric_striped_png(): """Test write tiled volume using an image compressor.""" data = random_data(numpy.uint8, (15, 63, 95, 3)) with TempFileName('write_volumetric_striped_png') as fname: imwrite( fname, data, photometric=PHOTOMETRIC.RGB, volumetric=True, rowsperstrip=32, compression=COMPRESSION.PNG, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert page.is_volumetric assert not page.is_tiled assert page.compression == COMPRESSION.PNG assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 95 assert page.imagelength == 63 assert page.imagedepth == 15 assert page.samplesperpixel == 3 assert len(page.dataoffsets) == 30 
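# 15 depth slices, each split into ceil(63/32) = 2 strips of rowsperstrip=32 rows, account for the 30 strip offsets and bytecounts checked here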
assert len(page.databytecounts) == 30 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert_aszarr_method(tif, image, chunkmode='page') assert__str__(tif) def test_write_volumetric_striped_planar_rgb(): """Test write 5D array as grayscale volumes.""" shape = (2, 3, 15, 63, 96) data = numpy.empty(shape, dtype=numpy.uint8) data[:] = numpy.arange(15, dtype=numpy.uint8).reshape(1, 1, -1, 1, 1) with TempFileName('write_volumetric_striped_planar_rgb') as fname: imwrite(fname, data, volumetric=True, photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 2 page = tif.pages.first assert page.is_volumetric assert not page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.SEPARATE assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 96 assert page.imagelength == 63 assert page.imagedepth == 15 assert page.samplesperpixel == 3 assert len(page.dataoffsets) == 15 * 3 assert len(page.databytecounts) == 15 * 3 series = tif.series[0] assert series.kind == 'shaped' assert len(series._pages) == 1 assert len(series.pages) == 2 assert series.dataoffset is not None assert series.shape == shape image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_volumetric_striped_contig_rgb(): """Test write 6D array as contig RGB volumes.""" shape = (2, 3, 15, 63, 95, 3) data = numpy.empty(shape, dtype=numpy.uint8) data[:] = numpy.arange(15, dtype=numpy.uint8).reshape(1, 1, -1, 1, 1, 1) with TempFileName('write_volumetric_striped_contig_rgb') as fname: imwrite(fname, data, volumetric=True, photometric=PHOTOMETRIC.RGB) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 6 page = tif.pages.first assert page.is_volumetric assert not page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 95 assert page.imagelength == 63 assert page.imagedepth == 15 assert page.samplesperpixel == 3 assert len(page.dataoffsets) == 15 assert len(page.databytecounts) == 15 series = tif.series[0] assert series.kind == 'shaped' assert len(series._pages) == 1 assert len(series.pages) == 6 assert series.dataoffset is not None assert series.shape == shape image = tif.asarray() assert_array_equal(data, image) # assert iterating over series.pages data = data.reshape((6, 15, 63, 95, 3)) for i, page in enumerate(series.pages): image = page.asarray() assert_array_equal(data[i], image) assert_aszarr_method(page, image) assert__str__(tif) @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_volumetric_striped_contig_rgb_empty(): """Test write empty 6D array as contig RGB volumes.""" shape = (2, 3, 15, 63, 95, 3) with TempFileName('write_volumetric_striped_contig_rgb_empty') as fname: with TiffWriter(fname) as tif: tif.write( shape=shape, dtype=numpy.uint8, volumetric=True, photometric=PHOTOMETRIC.RGB, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 6 page = tif.pages.first assert page.is_volumetric assert not page.is_tiled assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 95 assert page.imagelength == 63 assert page.imagedepth == 15 assert page.samplesperpixel == 3 assert len(page.dataoffsets) == 15 assert len(page.databytecounts) == 15 series = tif.series[0] assert series.kind == 'shaped' assert 
len(series._pages) == 1 assert len(series.pages) == 6 assert series.dataoffset is not None assert series.shape == shape image = tif.asarray() assert_array_equal(image.shape, shape) assert_aszarr_method(tif, image) assert__str__(tif) def test_write_contiguous(): """Test contiguous mode.""" data = random_data(numpy.uint8, (5, 4, 219, 301, 3)) with TempFileName('write_contiguous') as fname: with TiffWriter(fname, bigtiff=True) as tif: for i in range(data.shape[0]): tif.write( data[i], contiguous=True, photometric=PHOTOMETRIC.RGB ) # assert_jhove(fname) with TiffFile(fname) as tif: assert tif.is_bigtiff assert len(tif.pages) == 20 # check metadata is updated in-place assert tif.pages.first.tags[270].valueoffset < tif.pages[1].offset for page in tif.pages: assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 3 image = tif.asarray() assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_3gb(): """Test write 3 GB non-BigTIFF file.""" # https://github.com/blink1073/tifffile/issues/47 data = numpy.empty((4096 - 32, 1024, 1024), dtype=numpy.uint8) with TempFileName('write_3gb') as fname: imwrite(fname, data) del data assert_valid_tiff(fname) # assert file with TiffFile(fname) as tif: assert not tif.is_bigtiff @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_6gb(): """Test write 6 GB non-BigTIFF file.""" # https://stackoverflow.com/questions/74930263 data = numpy.empty((2**16, 2**15, 3), dtype=numpy.uint8) with TempFileName('write_6gb') as fname: imwrite( fname, data, bigtiff=False, photometric='rgb', rowsperstrip=2**15 ) del data assert_valid_tiff(fname) # assert file with TiffFile(fname) as tif: assert not tif.is_bigtiff assert tif.pages.first.dataoffsets[1] > 2**16 assert tif.pages.first.databytecounts[1] == 3221225472 # image = tif.asarray() # assert_array_equal(data, image) @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_5GB_fails(): """Test data too large for non-BigTIFF file.""" # TiffWriter should fail without bigtiff parameter data = numpy.empty((640, 1024, 1024), dtype=numpy.float64) data[:] = numpy.arange(640, dtype=numpy.float64).reshape(-1, 1, 1) with TempFileName('write_5GB_fails') as fname: with pytest.raises(ValueError): with TiffWriter(fname) as tif: tif.write(data) # TODO: test 'unclosed file' not in capsys @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_5GB_bigtiff(): """Test write 5GB BigTiff file.""" data = numpy.empty((640, 1024, 1024), dtype=numpy.float64) data[:] = numpy.arange(640, dtype=numpy.float64).reshape(-1, 1, 1) with TempFileName('write_5GB_bigtiff') as fname: # imwrite should use bigtiff for large data imwrite(fname, data) # assert_jhove(fname) # assert file with TiffFile(fname) as tif: assert tif.is_bigtiff assert len(tif.pages) == 640 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 1024 assert page.imagelength == 1024 assert page.samplesperpixel == 1 image = tif.asarray(out='memmap') assert_array_equal(data, image) del image del data assert__str__(tif) @pytest.mark.parametrize('compression', [0, 6]) @pytest.mark.parametrize('dtype', [numpy.uint8, numpy.uint16]) def test_write_palette(dtype, compression): """Test write palette images.""" dtype = numpy.dtype(dtype) 
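# PALETTE pages store one index sample per pixel; the ColorMap tag holds 3 * 2**bitspersample uint16 values, hence the (3, 256) colormap created below for uint8 and (3, 65536) for uint16 data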
data = random_data(dtype, (3, 219, 301)) cmap = random_data(numpy.uint16, (3, 2 ** (data.itemsize * 8))) with TempFileName(f'write_palette_{compression}{dtype}') as fname: imwrite( fname, data, colormap=cmap, compression=COMPRESSION.ADOBE_DEFLATE if compression else None, compressionargs={'level': compression} if compression else None, ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 3 page = tif.pages.first assert page.is_contiguous != bool(compression) assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.PALETTE assert page.imagewidth == 301 assert page.imagelength == 219 assert page.samplesperpixel == 1 for i, page in enumerate(tif.pages): assert_array_equal(apply_colormap(data[i], cmap), page.asrgb()) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_write_palette_django(): """Test write palette read from existing file.""" fname = private_file('django.tiff') with TiffFile(fname) as tif: page = tif.pages.first assert page.photometric == PHOTOMETRIC.PALETTE assert page.imagewidth == 320 assert page.imagelength == 480 data = page.asarray() # .squeeze() # UserWarning ... cmap = page.colormap assert__str__(tif) with TempFileName('write_palette_django') as fname: imwrite( fname, data, colormap=cmap, compression=COMPRESSION.ADOBE_DEFLATE ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.PALETTE assert page.imagewidth == 320 assert page.imagelength == 480 assert page.samplesperpixel == 1 image = page.asrgb(uint8=False) assert_array_equal(apply_colormap(data, cmap), image) assert__str__(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_write_multiple_series(): """Test write multiple data into one file using various options.""" data1 = imread(private_file('ome/multi-channel-4D-series.ome.tif')) image1 = imread(private_file('django.tiff')) image2 = imread(private_file('horse-16bit-col-littleendian.tif')) with TempFileName('write_multiple_series') as fname: with TiffWriter(fname, bigtiff=False) as tif: # series 0 tif.write( image1, compression=COMPRESSION.ADOBE_DEFLATE, compressionargs={'level': 5}, description='Django', ) # series 1 tif.write(image2, photometric=PHOTOMETRIC.RGB) # series 2 tif.write(data1[0], metadata=dict(axes='TCZYX')) for i in range(1, data1.shape[0]): tif.write(data1[i], contiguous=True) # series 3 tif.write(data1[0], contiguous=False) # series 4 tif.write(data1[0, 0, 0], tile=(64, 64)) # series 5 tif.write( image1, compression=COMPRESSION.ADOBE_DEFLATE, description='DEFLATE', ) assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 124 assert len(tif.series) == 6 series = tif.series[0] assert not series.dataoffset assert series.axes == 'YX' assert series.kind == 'shaped' assert_array_equal(image1, series.asarray()) assert_aszarr_method(series, image1) series = tif.series[1] assert series.dataoffset assert series.axes == 'YXS' assert series.kind == 'shaped' assert_array_equal(image2, series.asarray()) assert_aszarr_method(series, image2) series = tif.series[2] assert series.dataoffset assert series.keyframe.is_contiguous assert len(series) == 105 assert len(series.pages) == 105 assert isinstance(series[104], TiffPage) assert len(series[range(105)]) == 105 assert len(series[slice(0, None, 2)]) == 53 with pytest.raises(TypeError): assert series['1'] assert series.axes == 'TCZYX' 
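# series 2 was written with contiguous=True, so its 105 pages share a single contiguous data block that asarray(out='memmap') below can map directly from the file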
assert series.kind == 'shaped' result = series.asarray(out='memmap') assert_array_equal(data1, result) assert_aszarr_method(series, data1) assert tif.filehandle.path == result.filename del result series = tif.series[3] assert series.dataoffset assert series.axes == 'QQYX' assert series.kind == 'shaped' assert_array_equal(data1[0], series.asarray()) assert_aszarr_method(series, data1[0]) series = tif.series[4] assert not series.dataoffset assert series.axes == 'YX' assert series.kind == 'shaped' assert_array_equal(data1[0, 0, 0], series.asarray()) assert_aszarr_method(series, data1[0, 0, 0]) series = tif.series[5] assert not series.dataoffset assert series.axes == 'YX' assert series.kind == 'shaped' assert_array_equal(image1, series.asarray()) assert_aszarr_method(series, image1) assert__str__(tif) # test TiffFile.asarray key and series parameters assert_array_equal(image1, tif.asarray(key=0)) assert_array_equal(image1, tif.asarray(key=-1)) assert_array_equal(image2, tif.asarray(key=[1])) assert_array_equal(image2, tif.asarray(key=0, series=1)) assert_array_equal( image2, tif.asarray(key=0, series=tif.series[1]) ) assert_array_equal( data1, tif.asarray(key=range(2, 107)).reshape(data1.shape) ) assert_array_equal( data1, tif.asarray(key=range(105), series=2).reshape(data1.shape), ) assert_array_equal( data1, tif.asarray(key=slice(None), series=2).reshape(data1.shape), ) assert_array_equal( data1[0], tif.asarray(key=slice(107, 122)).reshape(data1[0].shape), ) assert_array_equal( data1[0].reshape(-1, 167, 439)[::2], tif.asarray(key=slice(107, 122, 2)).reshape((-1, 167, 439)), ) with pytest.raises(RuntimeError): tif.asarray(key=[0, 1]) with pytest.raises(RuntimeError): tif.asarray(key=[-3, -2]) assert_array_equal(image1, imread(fname, key=0)) assert_array_equal(image1, imread(fname, key=-1)) assert_array_equal(image2, imread(fname, key=[1])) assert_array_equal( data1, imread(fname, key=range(2, 107)).reshape(data1.shape) ) assert_array_equal( data1, imread(fname, key=range(105), series=2).reshape(data1.shape) ) assert_array_equal( data1[0], imread(fname, key=slice(107, 122)).reshape(data1[0].shape), ) @pytest.mark.skipif( SKIP_CODECS or not imagecodecs.PNG.available, reason=REASON ) def test_write_multithreaded(): """Test write large tiled multithreaded.""" data = ( numpy.arange(4001 * 6003 * 3) .astype(numpy.uint8) .reshape(4001, 6003, 3) ) with TempFileName('write_multithreaded') as fname: imwrite(fname, data, tile=(512, 512), compression='PNG', maxworkers=6) # assert_valid_tiff(fname) with TiffFile(fname) as tif: assert len(tif.pages) == 1 page = tif.pages.first assert not page.is_contiguous assert page.compression == COMPRESSION.PNG assert page.planarconfig == PLANARCONFIG.CONTIG assert page.imagewidth == 6003 assert page.imagelength == 4001 assert page.samplesperpixel == 3 image = tif.asarray(maxworkers=6) assert_array_equal(data, image) assert_aszarr_method(tif, image) assert__str__(tif) @pytest.mark.skipif(SKIP_ZARR, reason=REASON) def test_write_zarr(): """Test write to TIFF via Zarr interface.""" with TempFileName('write_zarr', ext='.ome.tif') as fname: with TiffWriter(fname, bigtiff=True) as tif: tif.write( shape=(7, 5, 252, 244), dtype=numpy.uint16, tile=(64, 64), subifds=2, ) tif.write( shape=(7, 5, 126, 122), dtype=numpy.uint16, tile=(64, 64) ) tif.write(shape=(7, 5, 63, 61), dtype=numpy.uint16, tile=(32, 32)) tif.write( shape=(3, 252, 244), dtype=numpy.uint8, photometric='RGB', planarconfig='SEPARATE', rowsperstrip=63, ) tif.write( shape=(252, 244, 3), dtype=numpy.uint8, 
photometric='RGB', rowsperstrip=64, ) tif.write( numpy.zeros((252, 244, 3), numpy.uint8), photometric='RGB', rowsperstrip=252, compression='zlib', ) with TiffFile(fname, mode='r+') as tif: with tif.series[0].aszarr() as store: z = zarr.open(store, mode='r+') z[0][2, 2:3, 100:111, 100:200] = 100 z[1][3, 3:4, 100:111, 100:] = 101 z[2][4, 4:5, 33:40, 41:] = 102 assert tif.asarray(series=0)[2, 2, 100, 199] == 100 assert tif.asarray(series=0, level=1)[3, 3, 100, 121] == 101 assert tif.asarray(series=0, level=2)[4, 4, 33, 41] == 102 with TiffFile(fname, mode='r+') as tif: with tif.series[1].aszarr() as store: z = zarr.open(store, mode='r+') z[1, 100:111, 100:200] = 104 assert tif.series[1].asarray()[1, 100, 199] == 104 with TiffFile(fname, mode='r+') as tif: with tif.series[2].aszarr() as store: z = zarr.open(store, mode='r+') z[200:, 20:, 1] = 105 assert tif.series[2].asarray()[251, 243, 1] == 105 with TiffFile(fname, mode='r+') as tif: with tif.series[3].aszarr() as store: z = zarr.open(store, mode='r+') with pytest.raises(PermissionError): z[100, 20] = 106 @pytest.mark.skipif(SKIP_ZARR, reason=REASON) def assert_fsspec(url, data, target_protocol='http'): """Assert fsspec ReferenceFileSystem from local http server.""" mapper = fsspec.get_mapper( 'reference://', fo=url, target_protocol=target_protocol ) zobj = zarr.open(mapper, mode='r') if isinstance(zobj, zarr.Group): assert_array_equal(zobj[0][:], data) assert_array_equal(zobj[1][:], data[:, ::2, ::2]) assert_array_equal(zobj[2][:], data[:, ::4, ::4]) else: assert_array_equal(zobj[:], data) @pytest.mark.skipif( SKIP_HTTP or SKIP_ZARR or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) @pytest.mark.parametrize('byteorder', ['<', '>']) @pytest.mark.parametrize('version', [0, 1]) def test_write_fsspec(version, byteorder): """Test write fsspec for multi-series OME-TIFF.""" from imagecodecs.numcodecs import register_codecs register_codecs('imagecodecs_jpeg', verbose=False) register_codecs('imagecodecs_delta', verbose=False) register_codecs('imagecodecs_floatpred', verbose=False) data0 = random_data(numpy.uint8, (3, 252, 244)) data1 = random_data(numpy.uint8, (219, 301, 3)) data2 = random_data(numpy.uint16, (3, 219, 301)) data3 = random_data(numpy.float32, (210, 301)) bo = {'>': 'be', '<': 'le'}[byteorder] with TempFileName( f'write_fsspec_v{version}_{bo}', ext='.ome.tif' ) as fname: filename = os.path.split(fname)[-1] with TiffWriter(fname, ome=True, byteorder=byteorder) as tif: # series 0 options = dict( tile=(64, 64), photometric=PHOTOMETRIC.MINISBLACK, compression=COMPRESSION.DEFLATE, predictor=PREDICTOR.HORIZONTAL, ) tif.write(data0, subifds=2, **options) tif.write(data0[:, ::2, ::2], subfiletype=1, **options) tif.write(data0[:, ::4, ::4], subfiletype=1, **options) # series 1 tif.write( data1, photometric=PHOTOMETRIC.RGB, rowsperstrip=data1.shape[0] ) # series 2 tif.write( data2, rowsperstrip=data1.shape[1], photometric=PHOTOMETRIC.RGB, planarconfig=PLANARCONFIG.SEPARATE, compression=COMPRESSION.DEFLATE, predictor=PREDICTOR.HORIZONTAL, ) # series 3 tif.write(data1, photometric=PHOTOMETRIC.RGB, rowsperstrip=5) # series 4 tif.write( data1, photometric=PHOTOMETRIC.RGB, tile=(32, 32), compression=COMPRESSION.JPEG, ) # series 5 tif.write( data3, rowsperstrip=105, photometric=PHOTOMETRIC.MINISBLACK, compression=COMPRESSION.DEFLATE, predictor=PREDICTOR.FLOATINGPOINT, ) with TiffFile(fname) as tif: assert tif.is_ome assert len(tif.series) == 6 # TODO: clean up temp JSON files with tif.series[0].aszarr() as store: assert 
store.is_multiscales store.write_fsspec( fname + f'.v{version}.s0.json', URL, version=version ) assert_array_equal(tif.series[0].asarray(), data0) assert_fsspec(URL + filename + f'.v{version}.s0.json', data0) with tif.series[1].aszarr() as store: assert not store.is_multiscales store.write_fsspec( fname + f'.v{version}.s1.json', URL, version=version ) assert_array_equal(tif.series[1].asarray(), data1) assert_fsspec(URL + filename + f'.v{version}.s1.json', data1) with tif.series[2].aszarr() as store: store.write_fsspec( fname + f'.v{version}.s2.json', URL, version=version ) assert_array_equal(tif.series[2].asarray(), data2) assert_fsspec(URL + filename + f'.v{version}.s2.json', data2) with tif.series[3].aszarr(chunkmode=2) as store: store.write_fsspec( fname + f'.v{version}.s3.json', URL, version=version ) assert_array_equal(tif.series[3].asarray(), data1) assert_fsspec(URL + filename + f'.v{version}.s3.json', data1) with tif.series[3].aszarr() as store: with pytest.raises(ValueError): # imagelength % rowsperstrip != 0 store.write_fsspec( fname + f'.v{version}.s3fail.json', URL, version=version, ) with tif.series[4].aszarr() as store: store.write_fsspec( fname + f'.v{version}.s4.json', URL, version=version ) assert_fsspec( URL + filename + f'.v{version}.s4.json', tif.series[4].asarray(), ) with tif.series[5].aszarr() as store: store.write_fsspec( fname + f'.v{version}.s5.json', URL, version=version ) assert_array_equal(tif.series[5].asarray(), data3) assert_fsspec(URL + filename + f'.v{version}.s5.json', data3) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) @pytest.mark.parametrize('version', [0, 1]) @pytest.mark.parametrize('chunkmode', [0, 2]) def test_write_fsspec_multifile(version, chunkmode): """Test write fsspec for multi-file OME series.""" fname = public_file('OME/multifile/multifile-Z1.ome.tiff') url = os.path.dirname(fname).replace('\\', '/') with TempFileName( f'write_fsspec_multifile_{version}{chunkmode}', ext='.json' ) as jsonfile: # write to file handle with open(jsonfile, 'w', encoding='utf-8') as fh: with TiffFile(fname) as tif: data = tif.series[0].asarray() with tif.series[0].aszarr(chunkmode=chunkmode) as store: store.write_fsspec( fh, url=url, version=version, templatename='f' ) mapper = fsspec.get_mapper( 'reference://', fo=jsonfile, target_protocol='file', remote_protocol='file', ) zobj = zarr.open(mapper, mode='r') assert_array_equal(zobj[:], data) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_LARGE or SKIP_CODECS or SKIP_ZARR, reason=REASON, ) @pytest.mark.parametrize('version', [1]) # 0, def test_write_fsspec_sequence(version): """Test write fsspec for multi-file sequence.""" # https://bbbc.broadinstitute.org/BBBC006 categories = {'p': {chr(i + 97): i for i in range(25)}} ptrn = r'(?:_(z)_(\d+)).*_(?P
<p>[a-z])(?P<a>\d+)(?:_(s)(\d))(?:_(w)(\d))' fnames = private_files('BBBC/BBBC006_v1_images_z_00/*.tif') fnames += private_files('BBBC/BBBC006_v1_images_z_01/*.tif') tifs = TiffSequence( fnames, imread=imagecodecs.imread, pattern=ptrn, axesorder=(1, 2, 0, 3, 4), categories=categories, ) assert len(tifs) == 3072 assert tifs.shape == (16, 24, 2, 2, 2) assert tifs.axes == 'PAZSW' data = tifs.asarray() with TempFileName( 'write_fsspec_sequence', ext=f'.v{version}.json' ) as fname: with tifs.aszarr(codec=imagecodecs.tiff_decode) as store: store.write_fsspec( fname, 'file:///' + store._commonpath.replace('\\', '/'), version=version, ) mapper = fsspec.get_mapper( 'reference://', fo=fname, target_protocol='file' ) from imagecodecs.numcodecs import register_codecs register_codecs() za = zarr.open(mapper, mode='r') assert_array_equal(za[:], data) @pytest.mark.skipif(SKIP_PUBLIC or SKIP_ZARR, reason=REASON) def test_write_tiff2fsspec(): """Test tiff2fsspec function.""" fname = public_file('tifffile/multiscene_pyramidal.ome.tif') url = os.path.dirname(fname).replace('\\', '/') data = imread(fname, series=0, level=1, maxworkers=1) with TempFileName('write_tiff2fsspec', ext='.json') as jsonfile: tiff2fsspec( fname, url, out=jsonfile, series=0, level=1, version=0, ) mapper = fsspec.get_mapper( 'reference://', fo=jsonfile, target_protocol='file', remote_protocol='file', ) zobj = zarr.open(mapper, mode='r') assert_array_equal(zobj[:], data) with pytest.raises(ValueError): tiff2fsspec( fname, url, out=jsonfile, series=0, level=1, version=0, chunkmode=CHUNKMODE.PAGE, ) @pytest.mark.skipif(SKIP_ZARR, reason=REASON) def test_write_numcodecs(): """Test write Zarr with numcodecs.Tiff.""" from tifffile import numcodecs data = numpy.arange(256 * 256 * 3, dtype=numpy.uint16).reshape(256, 256, 3) numcodecs.register_codec() compressor = numcodecs.Tiff( bigtiff=True, photometric=PHOTOMETRIC.MINISBLACK, planarconfig=PLANARCONFIG.CONTIG, compression=COMPRESSION.ADOBE_DEFLATE, compressionargs={'level': 5}, key=0, ) with TempFileName('write_numcodecs', ext='.zarr') as fname: z = zarr.open( fname, mode='w', shape=(256, 256, 3), chunks=(100, 100, 3), dtype=numpy.uint16, compressor=compressor, ) z[:] = data assert_array_equal(z[:], data) ############################################################################### # Test write ImageJ @pytest.mark.skipif(SKIP_EXTENDED, reason=REASON) @pytest.mark.parametrize( 'shape', [ (219, 301, 1), (219, 301, 2), (219, 301, 3), (219, 301, 4), (219, 301, 5), (1, 219, 301), (2, 219, 301), (3, 219, 301), (4, 219, 301), (5, 219, 301), (4, 3, 219, 301), (4, 219, 301, 3), (3, 4, 219, 301), (1, 3, 1, 219, 301), (3, 1, 1, 219, 301), (1, 3, 4, 219, 301), (3, 1, 4, 219, 301), (3, 4, 1, 219, 301), (3, 4, 1, 219, 301, 3), (2, 3, 4, 219, 301), (4, 3, 2, 219, 301, 3), ], ) @pytest.mark.parametrize( 'dtype', [numpy.uint8, numpy.uint16, numpy.int16, numpy.float32] ) @pytest.mark.parametrize('byteorder', ['>', '<']) def test_write_imagej(byteorder, dtype, shape): """Test write ImageJ format.""" # TODO: test compression and bigtiff ? 
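# ImageJ hyperstacks store at most six dimensions in TZCYXS order and, except for uint8 RGB, support only one sample per pixel; other sample dtypes with a color axis are expected to fail below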
dtype = numpy.dtype(dtype) if dtype != numpy.uint8 and shape[-1] in {3, 4}: pytest.xfail('ImageJ only supports uint8 RGB') data = random_data(dtype, shape) fname = 'write_imagej_{}_{}_{}'.format( {'<': 'le', '>': 'be'}[byteorder], dtype, str(shape).replace(' ', '') ) with TempFileName(fname) as fname: imwrite(fname, data, byteorder=byteorder, imagej=True) image = imread(fname) assert_array_equal(data.squeeze(), image.squeeze()) # TODO: assert_aszarr_method assert_valid_tiff(fname) def test_write_imagej_voxel_size(): """Test write ImageJ with xyz voxel size 2.6755x2.6755x3.9474 µm^3.""" data = numpy.zeros((4, 256, 256), dtype=numpy.float32) data.shape = 4, 1, 256, 256 with TempFileName('write_imagej_voxel_size') as fname: imwrite( fname, data, imagej=True, resolution=(0.373759, 0.373759), metadata={'spacing': 3.947368, 'unit': 'um'}, ) with TiffFile(fname) as tif: assert tif.is_imagej ijmeta = tif.imagej_metadata assert ijmeta is not None assert 'unit' in ijmeta assert ijmeta['unit'] == 'um' series = tif.series[0] assert series.kind == 'imagej' assert series.axes == 'ZYX' assert series.shape == (4, 256, 256) assert series.get_axes(False) == 'TZCYXS' assert series.get_shape(False) == (1, 4, 1, 256, 256, 1) assert__str__(tif) assert_valid_tiff(fname) def test_write_imagej_metadata(): """Test write additional ImageJ metadata.""" data = numpy.empty((4, 256, 256), dtype=numpy.uint16) data[:] = numpy.arange(256 * 256, dtype=numpy.uint16).reshape(1, 256, 256) with TempFileName('write_imagej_metadata') as fname: imwrite(fname, data, imagej=True, metadata={'unit': 'um'}) with TiffFile(fname) as tif: assert tif.is_imagej assert tif.imagej_metadata is not None assert 'unit' in tif.imagej_metadata assert tif.imagej_metadata['unit'] == 'um' assert__str__(tif) assert_valid_tiff(fname) @pytest.mark.parametrize('dtype', ['u1', 'u2', 'f4']) @pytest.mark.parametrize('photometric', [None, 'minisblack']) def test_write_imagej_colormap(dtype, photometric): """Test write ImageJ with colormap.""" # https://forum.image.sc/t/101788 data = numpy.empty((4, 256, 256), dtype=numpy.uint8) data[:] = numpy.arange(256 * 256, dtype=numpy.uint8).reshape(1, 256, 256) colormap = numpy.arange(3 * 256, dtype=numpy.uint16).reshape(3, 256) photometric_out = ( PHOTOMETRIC.PALETTE if dtype == 'u1' else PHOTOMETRIC.MINISBLACK ) with TempFileName(f'write_imagej_colormap_{dtype}_{photometric}') as fname: imwrite( fname, data.astype(dtype), imagej=True, colormap=colormap, photometric=photometric, ) with TiffFile(fname) as tif: assert tif.is_imagej page = tif.pages.first assert page.photometric == photometric_out assert_array_equal(page.colormap, colormap) assert__str__(tif) assert_valid_tiff(fname) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_write_imagej_ijmetadata_tag(): """Test write and read IJMetadata tag.""" fname = private_file('imagej/IJMetadata.tif') with TiffFile(fname) as tif: assert tif.is_imagej assert tif.byteorder == '>' assert len(tif.pages) == 3 assert len(tif.series) == 1 data = tif.asarray() ijmeta = tif.pages.first.tags['IJMetadata'].value assert ijmeta['Info'][:21] == 'FluorescentCells.tif\n' assert ijmeta['ROI'][:5] == b'Iout\x00' assert ijmeta['Overlays'][1][:5] == b'Iout\x00' assert ijmeta['Ranges'] == (0.0, 255.0, 0.0, 255.0, 0.0, 255.0) assert ijmeta['Labels'] == ['Red', 'Green', 'Blue'] assert ijmeta['LUTs'][2][2, 255] == 255 assert_valid_tiff(fname) with TempFileName('write_imagej_ijmetadata') as fname: with pytest.raises(TypeError): imwrite( fname, data, byteorder='>', imagej=True, 
metadata={'mode': 'composite'}, ijmetadata=ijmeta, ) imwrite( fname, data, byteorder='>', imagej=True, metadata={**ijmeta, 'mode': 'composite'}, ) with TiffFile(fname) as tif: assert tif.is_imagej assert tif.byteorder == '>' assert len(tif.pages) == 3 assert len(tif.series) == 1 imagej_metadata = tif.imagej_metadata data2 = tif.asarray() ijmeta2 = tif.pages.first.tags['IJMetadata'].value assert__str__(tif) assert_array_equal(data, data2) assert imagej_metadata is not None assert imagej_metadata['mode'] == 'composite' assert imagej_metadata['Info'] == ijmeta['Info'] assert ijmeta2['Info'] == ijmeta['Info'] assert ijmeta2['ROI'] == ijmeta['ROI'] assert ijmeta2['Overlays'] == ijmeta['Overlays'] assert ijmeta2['Ranges'] == ijmeta['Ranges'] assert ijmeta2['Labels'] == ijmeta['Labels'] assert_array_equal(ijmeta2['LUTs'][2], ijmeta['LUTs'][2]) assert_valid_tiff(fname) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_write_imagej_roundtrip(): """Test ImageJ metadata survive read/write roundtrip.""" fname = private_file('imagej/IJMetadata.tif') with TiffFile(fname) as tif: assert tif.is_imagej assert tif.byteorder == '>' assert len(tif.pages) == 3 assert len(tif.series) == 1 data = tif.asarray() ijmeta = tif.imagej_metadata assert ijmeta is not None assert ijmeta['Info'][:21] == 'FluorescentCells.tif\n' assert ijmeta['ROI'][:5] == b'Iout\x00' assert ijmeta['Overlays'][1][:5] == b'Iout\x00' assert ijmeta['Ranges'] == (0.0, 255.0, 0.0, 255.0, 0.0, 255.0) assert ijmeta['Labels'] == ['Red', 'Green', 'Blue'] assert ijmeta['LUTs'][2][2, 255] == 255 assert ijmeta['mode'] == 'composite' assert not ijmeta['loop'] assert ijmeta['ImageJ'] == '1.52b' assert_valid_tiff(fname) with TempFileName('write_imagej_ijmetadata_roundtrip') as fname: imwrite(fname, data, byteorder='>', imagej=True, metadata=ijmeta) with TiffFile(fname) as tif: assert tif.is_imagej assert tif.byteorder == '>' assert len(tif.pages) == 3 assert len(tif.series) == 1 ijmeta2 = tif.imagej_metadata data2 = tif.asarray() assert__str__(tif) assert_array_equal(data, data2) assert ijmeta2 is not None assert ijmeta2['ImageJ'] == ijmeta['ImageJ'] assert ijmeta2['mode'] == ijmeta['mode'] assert ijmeta2['Info'] == ijmeta['Info'] assert ijmeta2['ROI'] == ijmeta['ROI'] assert ijmeta2['Overlays'] == ijmeta['Overlays'] assert ijmeta2['Ranges'] == ijmeta['Ranges'] assert ijmeta2['Labels'] == ijmeta['Labels'] assert_array_equal(ijmeta2['LUTs'][2], ijmeta['LUTs'][2]) assert_valid_tiff(fname) @pytest.mark.parametrize('mmap', [False, True]) @pytest.mark.parametrize('truncate', [False, True]) def test_write_imagej_hyperstack(truncate, mmap): """Test write ImageJ hyperstack.""" shape = (5, 6, 7, 49, 61, 3) data = numpy.empty(shape, dtype=numpy.uint8) data[:] = numpy.arange(210, dtype=numpy.uint8).reshape(5, 6, 7, 1, 1, 1) _truncate = ['', '_trunc'][truncate] _memmap = ['', '_memmap'][mmap] with TempFileName(f'write_imagej_hyperstack{_truncate}{_memmap}') as fname: if mmap: image = memmap( fname, shape=data.shape, dtype=data.dtype, imagej=True, truncate=truncate, ) image[:] = data del image else: imwrite(fname, data, truncate=truncate, imagej=True) # assert file with TiffFile(fname) as tif: assert not tif.is_bigtiff assert not tif.is_shaped assert len(tif.pages) == 1 if truncate else 210 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric == PHOTOMETRIC.RGB assert page.imagewidth == 61 assert page.imagelength == 49 assert page.samplesperpixel == 3 # assert series properties series = 
tif.series[0] assert series.is_truncated == truncate assert series.kind == 'imagej' assert series.shape == shape assert len(series._pages) == 1 assert len(series.pages) == 1 if truncate else 210 assert series.dtype == numpy.uint8 assert series.axes == 'TZCYXS' assert series.get_axes(False) == 'TZCYXS' assert series.get_shape(False) == shape # assert data image = tif.asarray(out='memmap') assert_array_equal(data.squeeze(), image.squeeze()) del image # assert iterating over series.pages data = data.reshape((210, 49, 61, 3)) for i, page in enumerate(series.pages): image = page.asarray() assert_array_equal(data[i], image) del image assert__str__(tif) assert_valid_tiff(fname) def test_write_imagej_append(): """Test write ImageJ file consecutively.""" data = numpy.empty((256, 1, 256, 256), dtype=numpy.uint8) data[:] = numpy.arange(256, dtype=numpy.uint8).reshape(-1, 1, 1, 1) with TempFileName('write_imagej_append') as fname: with TiffWriter(fname, imagej=True) as tif: for image in data: tif.write(image, contiguous=True) assert_valid_tiff(fname) # assert file with TiffFile(fname) as tif: assert not tif.is_bigtiff assert not tif.is_shaped assert len(tif.pages) == 256 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 256 assert page.imagelength == 256 assert page.samplesperpixel == 1 # assert series properties series = tif.series[0] assert series.kind == 'imagej' assert series.shape == (256, 256, 256) assert series.dtype == numpy.uint8 assert series.axes == 'ZYX' assert series.get_axes(False) == 'TZCYXS' assert series.get_shape(False) == (1, 256, 1, 256, 256, 1) # assert data image = tif.asarray(out='memmap') assert_array_equal(data.squeeze(), image) del image assert__str__(tif) def test_write_imagej_bigtiff(): """Test write ImageJ BigTIFF with warning.""" with TempFileName('write_imagej_bigtiff') as fname: with pytest.warns(UserWarning): imwrite( fname, shape=(31, 33), dtype=numpy.float32, bigtiff=True, imagej=True, metadata=None, ) @pytest.mark.skipif(SKIP_LARGE, reason=REASON) def test_write_imagej_raw(): """Test write ImageJ 5 GB raw file.""" data = numpy.empty((1280, 1, 1024, 1024), dtype=numpy.float32) data[:] = numpy.arange(1280, dtype=numpy.float32).reshape(-1, 1, 1, 1) with TempFileName('write_imagej_big') as fname: with pytest.warns(UserWarning): # UserWarning: truncating ImageJ file imwrite(fname, data, imagej=True) assert_valid_tiff(fname) # assert file with TiffFile(fname) as tif: assert not tif.is_bigtiff assert not tif.is_shaped assert len(tif.pages) == 1 page = tif.pages.first assert page.is_contiguous assert page.planarconfig == PLANARCONFIG.CONTIG assert page.photometric != PHOTOMETRIC.RGB assert page.imagewidth == 1024 assert page.imagelength == 1024 assert page.samplesperpixel == 1 # assert series properties series = tif.series[0] assert series.kind == 'imagej' assert len(series._pages) == 1 assert len(series.pages) == 1 assert series.shape == (1280, 1024, 1024) assert series.dtype == numpy.float32 assert series.axes == 'ZYX' assert series.get_axes(False) == 'TZCYXS' assert series.get_shape(False) == (1, 1280, 1, 1024, 1024, 1) # assert data image = tif.asarray(out='memmap') assert_array_equal(data.squeeze(), image.squeeze()) del image if not SKIP_ZARR: store = imread(fname, aszarr=True) try: z = zarr.open(store, mode='r') chunk = z[333:356, 99:513, 31:127] finally: store.close() assert_array_equal(chunk, data[333:356, 0, 99:513, 31:127]) store = imread(fname, 
aszarr=True, chunkmode=2) try: z = zarr.open(store, mode='r') chunk = z[333:356, 99:513, 31:127] finally: store.close() assert_array_equal(chunk, data[333:356, 0, 99:513, 31:127]) assert__str__(tif) @pytest.mark.skipif(SKIP_EXTENDED, reason=REASON) @pytest.mark.parametrize( 'shape, axes', [ ((219, 301, 1), None), ((219, 301, 2), None), ((219, 301, 3), None), ((219, 301, 4), None), ((219, 301, 5), None), ((1, 219, 301), None), ((2, 219, 301), None), ((3, 219, 301), None), ((4, 219, 301), None), ((5, 219, 301), None), ((4, 3, 219, 301), None), ((4, 219, 301, 3), None), ((3, 4, 219, 301), None), ((1, 3, 1, 219, 301), None), ((3, 1, 1, 219, 301), None), ((1, 3, 4, 219, 301), None), ((3, 1, 4, 219, 301), None), ((3, 4, 1, 219, 301), None), ((3, 4, 1, 219, 301, 3), None), ((2, 3, 4, 219, 301), None), ((4, 3, 2, 219, 301, 3), None), ((3, 33, 31), 'CYX'), ((33, 31, 3), 'YXC'), ((5, 1, 33, 31), 'CSYX'), ((5, 1, 33, 31), 'ZCYX'), ((2, 3, 4, 219, 301, 3), 'TCZYXS'), ((10, 5, 63, 65), 'EPYX'), ((2, 3, 4, 5, 6, 7, 33, 31, 3), 'TQCPZRYXS'), ], ) def test_write_ome(shape, axes): """Test write OME-TIFF format.""" photometric = None planarconfig = None if shape[-1] in {3, 4}: photometric = PHOTOMETRIC.RGB planarconfig = PLANARCONFIG.CONTIG elif shape[-3] in {3, 4}: photometric = PHOTOMETRIC.RGB planarconfig = PLANARCONFIG.SEPARATE metadata = {'axes': axes} if axes is not None else None data = random_data(numpy.uint8, shape) fname = 'write_ome_{}.ome'.format(str(shape).replace(' ', '')) with TempFileName(fname) as fname: imwrite( fname, data, metadata=metadata, photometric=photometric, planarconfig=planarconfig, ) with TiffFile(fname) as tif: assert tif.is_ome assert not tif.is_shaped assert tif.series[0].kind == 'ome' image = tif.asarray() omexml = tif.ome_metadata if axes: if axes == 'CYX': axes = 'SYX' elif axes == 'YXC': axes = 'YXS' assert tif.series[0].axes == squeeze_axes(shape, axes)[1] assert_array_equal(data.squeeze(), image.squeeze()) assert_aszarr_method(tif, image) assert_valid_omexml(omexml) assert_valid_tiff(fname) def test_write_ome_enable(): """Test OME-TIFF enabling.""" data = numpy.zeros((32, 32), dtype=numpy.uint8) with TempFileName('write_ome_enable.ome') as fname: imwrite(fname, data) with TiffFile(fname) as tif: assert tif.is_ome imwrite(fname, data, description='not OME') with TiffFile(fname) as tif: assert not tif.is_ome with pytest.warns(UserWarning): imwrite(fname, data, description='not OME', ome=True) with TiffFile(fname) as tif: assert tif.is_ome imwrite(fname, data, imagej=True) with TiffFile(fname) as tif: assert not tif.is_ome assert tif.is_imagej imwrite(fname, data, imagej=True, ome=True) with TiffFile(fname) as tif: assert tif.is_ome assert not tif.is_imagej with TempFileName('write_ome_auto.tif') as fname: imwrite(fname, data) with TiffFile(fname) as tif: assert not tif.is_ome imwrite(fname, data, ome=True) with TiffFile(fname) as tif: assert tif.is_ome @pytest.mark.skipif(SKIP_PUBLIC, reason=REASON) @pytest.mark.parametrize( 'method', ['manual', 'copy', 'iter', 'compression', 'xml', 'dask'] ) def test_write_ome_methods(method): """Test re-write OME-TIFF.""" # 4D (7 time points, 5 focal planes) if method == 'dask' and SKIP_DASK: pytest.skip('skip dask') fname = public_file('OME/bioformats-artificial/4D-series.ome.tiff') with TiffFile(fname) as tif: series = tif.series[0] data = series.asarray() dtype = data.dtype shape = data.shape axes = series.axes omexml = tif.ome_metadata if method == 'dask': store = series.aszarr() darr = dask.array.from_zarr(store) def pages(): 
yield from data.reshape(-1, *data.shape[-2:]) with TempFileName(f'write_ome_{method}.ome') as fname: if method == 'xml': # use original XML metadata metadata = xml2dict(omexml) metadata['axes'] = axes imwrite( fname, data, byteorder='>', photometric=PHOTOMETRIC.MINISBLACK, metadata=metadata, ) elif method == 'manual': # manually write omexml to first page and data to individual pages # process OME-XML omexml = omexml.replace( '4D-series.ome.tiff', os.path.split(fname)[-1] ) # omexml = omexml.replace('BigEndian="true"', 'BigEndian="false"') data = data.view(data.dtype.newbyteorder('>')) # save image planes in the order referenced in the OME-XML # make sure storage options (compression, byteorder, photometric) # match OME-XML # write OME-XML to first page only with TiffWriter(fname, byteorder='>') as tif: for i, image in enumerate(pages()): description = omexml if i == 0 else None tif.write( image, description=description, photometric=PHOTOMETRIC.MINISBLACK, metadata=None, contiguous=False, ) elif method == 'iter': # use iterator over individual pages imwrite( fname, pages(), shape=shape, dtype=dtype, byteorder='>', photometric=PHOTOMETRIC.MINISBLACK, metadata={'axes': axes}, ) elif method == 'compression': # use iterator with compression imwrite( fname, pages(), shape=shape, dtype=dtype, byteorder='>', photometric=PHOTOMETRIC.MINISBLACK, compression=COMPRESSION.ADOBE_DEFLATE, metadata={'axes': axes}, ) elif method == 'copy': # use one numpy array imwrite( fname, data, byteorder='>', photometric=PHOTOMETRIC.MINISBLACK, metadata={'axes': axes}, ) elif method == 'dask': # dask array imwrite( fname, darr, byteorder='>', photometric=PHOTOMETRIC.MINISBLACK, compression=COMPRESSION.ADOBE_DEFLATE, metadata={'axes': axes}, ) del darr store.close() with TiffFile(fname) as tif: assert tif.is_ome assert tif.byteorder == '>' assert len(tif.pages) == 35 assert len(tif.series) == 1 # assert page properties page = tif.pages.first if method not in {'compression', 'dask'}: assert page.is_contiguous assert page.compression == COMPRESSION.NONE assert page.imagewidth == 439 assert page.imagelength == 167 assert page.bitspersample == 8 assert page.samplesperpixel == 1 # assert series properties series = tif.series[0] assert series.kind == 'ome' assert series.shape == (7, 5, 167, 439) assert series.dtype == numpy.int8 assert series.axes == 'TZYX' # assert data assert_array_equal(data, tif.asarray()) assert_valid_omexml(tif.ome_metadata) assert__str__(tif) assert_valid_tiff(fname) @pytest.mark.parametrize('contiguous', [True, False]) def test_write_ome_manual(contiguous): """Test write OME-TIFF manually.""" data = numpy.random.randint(0, 255, (19, 31, 21), numpy.uint8) with TempFileName(f'write_ome__manual{int(contiguous)}.ome') as fname: with TiffWriter(fname) as tif: # successively write image data to TIFF pages # disable tifffile from writing any metadata # add empty ImageDescription tag to first page for i, frame in enumerate(data): tif.write( frame, contiguous=contiguous, metadata=None, description=None if i else b'', ) # update ImageDescription tag with custom OME-XML xml = OmeXml() xml.addimage( numpy.uint8, (16, 31, 21), (16, 1, 1, 31, 21, 1), axes='ZYX' ) xml.addimage( numpy.uint8, (3, 31, 21), (3, 1, 1, 31, 21, 1), axes='CYX' ) tif.overwrite_description(xml.tostring()) with TiffFile(fname) as tif: assert tif.is_ome assert len(tif.pages) == 19 assert len(tif.series) == 2 # assert series properties series = tif.series[0] assert series.kind == 'ome' assert series.axes == 'ZYX' assert bool(series.dataoffset) 
== contiguous assert_array_equal(data[:16], series.asarray()) series = tif.series[1] assert series.kind == 'ome' assert series.axes == 'CYX' assert bool(series.dataoffset) == contiguous assert_array_equal(data[16:], series.asarray()) # assert_valid_omexml(tif.ome_metadata) assert__str__(tif) assert_valid_tiff(fname) @pytest.mark.skipif( SKIP_PUBLIC or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_rewrite_ome(): """Test rewrite multi-series, pyramidal OME-TIFF.""" # https://github.com/cgohlke/tifffile/issues/156 # - data loss in case of JPEG recompression; copy tiles verbatim # - OME metadata not copied; use ometypes library after writing # - tifffile does not support multi-file OME-TIFF writing fname = public_file('tifffile/multiscene_pyramidal.ome.tif') with TiffFile(fname) as tif: assert tif.is_ome with TempFileName('rewrite_ome', ext='.ome.tif') as fname_copy: with TiffWriter( fname_copy, bigtiff=tif.is_bigtiff, byteorder=tif.byteorder, ome=tif.is_ome, ) as copy: for series in tif.series: subifds = len(series.levels) - 1 metadata = {'axes': series.axes} for level in series.levels: keyframe = level.keyframe copy.write( level.asarray(), planarconfig=keyframe.planarconfig, photometric=keyframe.photometric, extrasamples=keyframe.extrasamples, tile=keyframe.tile, rowsperstrip=keyframe.rowsperstrip, compression=keyframe.compression, predictor=keyframe.predictor, subsampling=keyframe.subsampling, datetime=keyframe.datetime, resolution=keyframe.resolution, resolutionunit=keyframe.resolutionunit, subfiletype=keyframe.subfiletype, colormap=keyframe.colormap, iccprofile=keyframe.iccprofile, subifds=subifds, metadata=metadata, ) subifds = None metadata = None # verify with TiffFile(fname_copy) as copy: assert copy.byteorder == tif.byteorder assert copy.is_bigtiff == tif.is_bigtiff assert copy.is_imagej == tif.is_imagej assert copy.is_ome == tif.is_ome assert len(tif.series) == len(copy.series) for series, series_copy in zip(tif.series, copy.series): assert len(series.levels) == len(series_copy.levels) metadata = {'axes': series.axes} for level, level_copy in zip( series.levels, series_copy.levels ): assert len(level.pages) == len(level_copy.pages) assert level.shape == level_copy.shape assert level.dtype == level_copy.dtype assert level.keyframe.hash == level_copy.keyframe.hash @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE or not imagecodecs.JPEG.available, reason=REASON, ) def test_write_ome_copy(): """Test write pyramidal OME-TIFF by copying compressed tiles from SVS.""" def tiles(page): # return iterator over compressed tiles in page assert page.is_tiled fh = page.parent.filehandle for offset, bytecount in zip(page.dataoffsets, page.databytecounts): fh.seek(offset) yield fh.read(bytecount) with TiffFile(private_file('AperioSVS/CMU-1.svs')) as svs: assert svs.is_svs levels = svs.series[0].levels with TempFileName('write_ome_copy', ext='.ome.tif') as fname: with TiffWriter(fname, ome=True, bigtiff=True) as tif: level = levels[0] assert len(level.pages) == 1 page = level.pages[0] if page.compression == 7: # override default that RGB will be compressed as YCBCR compressionargs = {'outcolorspace': page.photometric} else: compressionargs = {} extratags = ( # copy some extra tags page.tags.get('ImageDepth').astuple(), # this is ignored page.tags.get('InterColorProfile').astuple(), ) tif.write( tiles(page), shape=page.shape, dtype=page.dtype, tile=page.tile, datetime=page.datetime, photometric=page.photometric, planarconfig=page.planarconfig, 
compression=page.compression, compressionargs=compressionargs, jpegtables=page.jpegtables, iccprofile=page.iccprofile, subsampling=page.subsampling, subifds=len(levels) - 1, extratags=extratags, ) for level in levels[1:]: assert len(level.pages) == 1 page = level.pages[0] if page.compression == 7: compressionargs = {'outcolorspace': page.photometric} else: compressionargs = {} tif.write( tiles(page), shape=page.shape, dtype=page.dtype, tile=page.tile, datetime=page.datetime, photometric=page.photometric, planarconfig=page.planarconfig, compression=page.compression, compressionargs=compressionargs, jpegtables=page.jpegtables, iccprofile=page.iccprofile, subsampling=page.subsampling, subfiletype=1, ) with TiffFile(fname) as tif: assert tif.is_ome assert len(tif.pages) == 1 assert len(tif.pages.first.pages) == 2 assert 'InterColorProfile' in tif.pages.first.tags assert 'ImageDepth' not in tif.pages.first.tags assert tif.series[0].kind == 'ome' levels_ = tif.series[0].levels assert len(levels_) == len(levels) for level, level_ in zip(levels[1:], levels_[1:]): assert level.shape == level_.shape assert level.dtype == level_.dtype assert_array_equal(level.asarray(), level_.asarray()) @pytest.mark.skipif( SKIP_PRIVATE or SKIP_CODECS or not imagecodecs.JPEG.available, reason=REASON, ) def test_write_geotiff_copy(): """Test write a copy of striped, compressed GeoTIFF.""" def strips(page): # return iterator over compressed strips in page assert not page.is_tiled fh = page.parent.filehandle for offset, bytecount in zip(page.dataoffsets, page.databytecounts): fh.seek(offset) yield fh.read(bytecount) with TiffFile(private_file('GeoTIFF/ML_30m.tif')) as geotiff: assert geotiff.is_geotiff assert len(geotiff.pages) == 1 with TempFileName('write_geotiff_copy') as fname: with TiffWriter( fname, byteorder=geotiff.byteorder, bigtiff=geotiff.is_bigtiff ) as tif: page = geotiff.pages[0] tags = page.tags extratags = ( tags.get('ModelPixelScaleTag').astuple(), tags.get('ModelTiepointTag').astuple(), tags.get('GeoKeyDirectoryTag').astuple(), tags.get('GeoAsciiParamsTag').astuple(), tags.get('GDAL_NODATA').astuple(), ) tif.write( strips(page), shape=page.shape, dtype=page.dtype, rowsperstrip=page.rowsperstrip, photometric=page.photometric, planarconfig=page.planarconfig, compression=page.compression, predictor=page.predictor, jpegtables=page.jpegtables, iccprofile=page.iccprofile, subsampling=page.subsampling, extratags=extratags, ) with TiffFile(fname) as tif: assert tif.is_geotiff assert len(tif.pages) == 1 page = tif.pages.first assert page.nodata == -32767 assert page.tags['ModelPixelScaleTag'].value == ( 30.0, 30.0, 0.0, ) assert page.tags['ModelTiepointTag'].value == ( 0.0, 0.0, 0.0, 1769487.0, 5439473.0, 0.0, ) assert tif.geotiff_metadata['GeogAngularUnitsGeoKey'] == 9102 assert_array_equal(tif.asarray(), geotiff.asarray()) @pytest.mark.parametrize('ext', ['', 'b']) def test_write_open_mode(ext): """Test TiffWriter with file open modes.""" data = numpy.random.randint(0, 255, (5, 31, 21), numpy.uint8) with TempFileName('write_open_mode' + ext) as fname: # write imwrite(fname, data, mode='w' + ext) assert_array_equal(imread(fname), data) # exclusive creation with pytest.raises(FileExistsError): imwrite(fname, data, mode='x' + ext) os.remove(fname) imwrite(fname, data, mode='x' + ext) assert_array_equal(imread(fname), data) # append imwrite(fname, data, mode='r+' + ext) with TiffFile(fname) as tif: assert len(tif.pages) == 10 assert len(tif.series) == 2 assert_array_equal(tif.series[0].asarray(), data) 
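# Appending with mode='r+' writes five more pages after the existing
# ones; because each imwrite call records its own series metadata, they
# read back as a second, independent series (hence 10 pages, 2 series).
# A minimal sketch of the same pattern, assuming 'stack.tif' is a
# hypothetical file and data is any numpy array:
#
#     imwrite('stack.tif', data)             # create file, series 0
#     imwrite('stack.tif', data, mode='r+')  # append, adds series 1
#     with TiffFile('stack.tif') as tif:
#         assert len(tif.series) == 2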
assert_array_equal(tif.series[1].asarray(), data) # write to file handle with FileHandle(fname, mode='w' + ext) as fh: imwrite(fh, data, mode='ignored') assert_array_equal(imread(fname), data) # exclusive creation with file handle with pytest.raises(FileExistsError): with FileHandle(fname, mode='x' + ext) as fh: pass os.remove(fname) with FileHandle(fname, mode='x' + ext) as fh: imwrite(fh, data, mode='ignored' + ext) assert not fh.closed fh.seek(0) with pytest.raises(OSError): # io.UnsupportedOperation fh.read(8) assert_array_equal(imread(fname), data) # append to file handle with FileHandle(fname, mode='r+' + ext) as fh: imwrite(fh, data, mode='ignored') assert not fh.closed fh.seek(0) with TiffFile(fh) as tif: assert len(tif.pages) == 10 assert len(tif.series) == 2 assert_array_equal(tif.series[0].asarray(), data) assert_array_equal(tif.series[1].asarray(), data) with pytest.raises(ValueError): imwrite(fname, data, mode=ext) with pytest.raises(ValueError): imwrite(fname, data, mode='w' + ext, append=True) @pytest.mark.skipif(SKIP_ZARR or SKIP_DASK or SKIP_CODECS, reason=REASON) @pytest.mark.parametrize('atype', ['numpy', 'dask', 'zarr']) @pytest.mark.parametrize('compression', [None, 'zlib']) @pytest.mark.parametrize('tiled', [None, False, True]) @pytest.mark.parametrize('byteorder', ['<', '>']) def test_write_array_types(atype, byteorder, tiled, compression): """Test write array types.""" kwargs = {} fname = f'write_{atype}_' if tiled is None: pass elif tiled: kwargs['tile'] = (64, 64) fname += 'tiled_' else: kwargs['rowsperstrip'] = 64 fname += 'striped_' if compression is not None: kwargs['compression'] = compression fname += compression fname += 'be' if byteorder == '>' else 'le' data = numpy.random.randint(0, 1024, (3, 361, 254, 3), numpy.uint16) if atype == 'numpy': arr = data.copy() elif atype == 'dask': arr = dask.array.from_array(data, chunks=(1, 32, 64, 3)) elif atype == 'zarr': arr = zarr.zeros(data.shape, chunks=(1, 32, 64, 3), dtype=data.dtype) arr[:] = data with TempFileName(fname) as fname: imwrite( fname, arr[1:, 33 : 33 + 261, 31 : 31 + 200], byteorder=byteorder, **kwargs, ) assert_array_equal( imread(fname), data[1:, 33 : 33 + 261, 31 : 31 + 200] ) assert_array_equal( imagecodecs.imread(fname, index=None), data[1:, 33 : 33 + 261, 31 : 31 + 200], ) ############################################################################### # Test embedded TIFF files EMBED_NAME = public_file('tifffile/test_FileHandle.bin') EMBED_OFFSET = 7077 EMBED_SIZE = 5744 EMBED_OFFSET1 = 13820 EMBED_SIZE1 = 7936382 def assert_embed_tif(tif): """Assert embedded TIFF file.""" # 4 series in 6 pages assert tif.byteorder == '<' assert len(tif.pages) == 6 assert len(tif.series) == 4 # assert series 0 properties series = tif.series[0] assert series.shape == (3, 20, 20) assert series.dtype == numpy.uint8 assert series.axes == 'IYX' assert series.kind == 'generic' page = series.pages[0] assert page.compression == COMPRESSION.LZW assert page.imagewidth == 20 assert page.imagelength == 20 assert page.bitspersample == 8 assert page.samplesperpixel == 1 data = tif.asarray(series=0) assert isinstance(data, numpy.ndarray) assert data.shape == (3, 20, 20) assert data.dtype == numpy.uint8 assert tuple(data[:, 9, 9]) == (19, 90, 206) # assert series 1 properties series = tif.series[1] assert series.shape == (10, 10, 3) assert series.dtype == numpy.float32 assert series.axes == 'YXS' assert series.kind == 'generic' page = series.pages[0] assert page.photometric == PHOTOMETRIC.RGB assert page.compression == 
COMPRESSION.LZW assert page.imagewidth == 10 assert page.imagelength == 10 assert page.bitspersample == 32 assert page.samplesperpixel == 3 data = tif.asarray(series=1) assert isinstance(data, numpy.ndarray) assert data.shape == (10, 10, 3) assert data.dtype == numpy.float32 assert round(abs(data[9, 9, 1] - 214.5733642578125), 7) == 0 # assert series 2 properties series = tif.series[2] assert series.shape == (20, 20, 3) assert series.dtype == numpy.uint8 assert series.axes == 'YXS' assert series.kind == 'generic' page = series.pages[0] assert page.photometric == PHOTOMETRIC.RGB assert page.compression == COMPRESSION.LZW assert page.imagewidth == 20 assert page.imagelength == 20 assert page.bitspersample == 8 assert page.samplesperpixel == 3 data = tif.asarray(series=2) assert isinstance(data, numpy.ndarray) assert data.shape == (20, 20, 3) assert data.dtype == numpy.uint8 assert tuple(data[9, 9, :]) == (19, 90, 206) # assert series 3 properties series = tif.series[3] assert series.shape == (10, 10) assert series.dtype == numpy.float32 assert series.axes == 'YX' assert series.kind == 'generic' page = series.pages[0] assert page.compression == COMPRESSION.LZW assert page.imagewidth == 10 assert page.imagelength == 10 assert page.bitspersample == 32 assert page.samplesperpixel == 1 data = tif.asarray(series=3) assert isinstance(data, numpy.ndarray) assert data.shape == (10, 10) assert data.dtype == numpy.float32 assert round(abs(data[9, 9] - 223.1648712158203), 7) == 0 assert__str__(tif) def assert_embed_micromanager(tif): """Assert embedded MicroManager TIFF file.""" assert tif.is_ome assert tif.is_imagej assert tif.is_micromanager assert tif.byteorder == '<' assert len(tif.pages) == 15 assert len(tif.series) == 1 # assert non-tiff micromanager_metadata assert tif.micromanager_metadata is not None tags = tif.micromanager_metadata['Summary'] assert tags['MicroManagerVersion'] == '1.4.x dev' assert tags['UserName'] == 'trurl' # assert page properties page = tif.pages.first assert page.is_contiguous assert page.compression == COMPRESSION.NONE assert page.imagewidth == 512 assert page.imagelength == 512 assert page.bitspersample == 16 assert page.samplesperpixel == 1 # two description tags assert page.description.startswith('[a-z])(?P\d+)(?:_(s)(\d))(?:_(w)(\d))' fnames = private_files('BBBC/BBBC006_v1_images_z_00/*.tif') fnames += private_files('BBBC/BBBC006_v1_images_z_01/*.tif') tifs = TiffSequence( fnames, pattern=ptrn, categories=categories, axesorder=(1, 2, 0, 3, 4) ) assert len(tifs) == 3072 assert tifs.shape == (16, 24, 2, 2, 2) assert tifs.axes == 'PAZSW' data = tifs.asarray() assert isinstance(data, numpy.ndarray) assert data.flags['C_CONTIGUOUS'] assert data.shape == (16, 24, 2, 2, 2, 520, 696) assert data.dtype == numpy.uint16 assert data[8, 12, 1, 0, 1, 256, 519] == 1579 if not SKIP_ZARR: with tifs.aszarr() as store: assert_array_equal(data, zarr.open(store, mode='r')) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) @pytest.mark.parametrize('tiled', [False, True]) def test_sequence_tiled(tiled): """Test FileSequence with tiled OME-TIFFs.""" # Dataset from https://github.com/tlambert03/tifffolder/issues/2 ptrn = re.compile( r'\[(?P<U>\d+) x (?P<V>\d+)\].*(C)(\d+).*(Z)(\d+)', re.IGNORECASE ) fnames = private_file('TiffSequenceTiled/*.tif') tifs = TiffSequence(fnames, pattern=ptrn) assert len(tifs) == 60 assert tifs.shape == (2, 3, 2, 5) assert tifs.axes == 'UVCZ' tiled = {0: 0, 1: 1} if tiled else None data = tifs.asarray(axestiled=tiled, is_ome=False) assert isinstance(data, 
numpy.ndarray) assert data.flags['C_CONTIGUOUS'] assert data.dtype == numpy.uint16 if tiled: assert data.shape == (2, 5, 2 * 2560, 3 * 2160) assert data[1, 3, 2560 + 1024, 2 * 2160 + 1024] == 596 else: assert data.shape == (2, 3, 2, 5, 2560, 2160) assert data[1, 2, 1, 3, 1024, 1024] == 596 if not SKIP_ZARR: with tifs.aszarr(axestiled=tiled, is_ome=False) as store: if tiled: assert_array_equal( data[1, 3, 2048:3072], zarr.open(store, mode='r')[1, 3, 2048:3072], ) else: assert_array_equal( data[1, 2, 1, 3:5], zarr.open(store, mode='r')[1, 2, 1, 3:5], ) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_sequence_imread(): """Test TiffSequence with imagecodecs.imread.""" fname = private_files('PNG/*.png') pngs = TiffSequence(fname, imread=imagecodecs.imread) assert len(pngs) == 4 assert pngs.shape == (4,) assert pngs.axes == 'I' data = pngs.asarray(codec=imagecodecs.png_decode) assert data.flags['C_CONTIGUOUS'] assert data.shape == (4, 200, 200) assert data.dtype == numpy.uint16 if not SKIP_ZARR: with pngs.aszarr(codec=imagecodecs.png_decode) as store: assert_array_equal(data, zarr.open(store, mode='r')) del data @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_sequence_imread_glob(): """Test imread function with glob pattern.""" fname = private_files('TiffSequence/*.tif') data = imread( fname, imreadargs={'key': 0}, chunkshape=(480, 640), chunkdtype=numpy.uint8, ) assert data.shape == (10, 480, 640) if not SKIP_ZARR: store = imread( fname, aszarr=True, imreadargs={'key': 0}, chunkshape=(480, 640), chunkdtype=numpy.uint8, ) try: assert_array_equal(data, zarr.open(store, mode='r')) finally: store.close() @pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON) def test_sequence_imread_one(): """Test imread function with one item in sequence.""" fname = private_file('TiffSequence/AT3_1m4_01.tif') assert_array_equal(imread([fname]), imread(fname)) ############################################################################### # Test packages depending on tifffile @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_dependent_roifile(): """Test roifile.ImagejRoi class.""" from roifile import ImagejRoi for roi in ImagejRoi.fromfile(private_file('imagej/IJMetadata.tif')): assert roi == ImagejRoi.frombytes(roi.tobytes()) roi.coordinates() str(roi) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_dependent_lfdfiles(): """Test lfdfiles conversion to TIFF.""" from lfdfiles import LfdFileSequence, SimfcsInt, SimfcsZ64 filename = private_file('SimFCS/simfcs.Z64') with TempFileName('depend_simfcsz_z64', ext='.tif') as outfile: with SimfcsZ64(filename) as z64: data = z64.asarray() z64.totiff(outfile) with TiffFile(outfile) as tif: assert len(tif.pages) == 256 assert len(tif.series) == 1 assert tif.series[0].shape == (256, 256, 256) assert tif.series[0].dtype == numpy.float32 assert_array_equal(data, tif.asarray()) filename = private_file('SimFCS/gpint') with LfdFileSequence( filename + '/v*001.int', pattern=r'v(?P<C>\d)(?P<I>\d*).int', imread=SimfcsInt, ) as ims: assert ims.axes == 'CI' assert ims.asarray().shape == (2, 1, 256, 256) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_dependent_cmapfile(): """Test cmapfile.lsm2cmap.""" from cmapfile import CmapFile, lsm2cmap filename = private_file('LSM/3d_zfish_onephoton_zoom.lsm') data = imread(filename) with TempFileName('depend_cmapfile', ext='.cmap') as cmapfile: lsm2cmap(filename, cmapfile, step=(1.0, 1.0, 20.0)) fname = os.path.join( os.path.split(cmapfile)[0], 
cmapfile.replace('.cmap', '.ch0000.cmap'), ) with CmapFile(fname, mode='r') as cmap: assert_array_equal( cmap['map00000']['data00000'], data.squeeze()[:, 0, :, :] ) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_dependent_czifile(): """Test czifile.CziFile.""" # TODO: test LZW compressed czi file from czifile import CziFile fname = private_file('czi/pollen.czi') # with pytest.warns(DeprecationWarning): with CziFile(fname) as czi: assert czi.shape == (1, 1, 104, 365, 364, 1) assert czi.axes == 'TCZYX0' # verify data data = czi.asarray() assert data.flags['C_CONTIGUOUS'] assert data.shape == (1, 1, 104, 365, 364, 1) assert data.dtype == numpy.uint8 assert data[0, 0, 52, 182, 182, 0] == 10 @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_dependent_czi2tif(): """Test czifile.czi2tif.""" from czifile.czifile import CziFile, czi2tif fname = private_file('CZI/pollen.czi') # with pytest.warns(DeprecationWarning): with CziFile(fname) as czi: metadata = czi.metadata() data = czi.asarray().squeeze() with TempFileName('depend_czi2tif') as tif: czi2tif(fname, tif, bigtiff=False) with TiffFile(tif) as t: im = t.asarray() assert t.pages[0].description == metadata assert_array_equal(im, data) del im del data assert_valid_tiff(tif) @pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON) def test_dependent_czi2tif_airy(): """Test czifile.czi2tif with AiryScan.""" from czifile.czifile import czi2tif fname = private_file('CZI/AiryscanSRChannel.czi') with TempFileName('depend_czi2tif_airy') as tif: # with pytest.warns(DeprecationWarning): czi2tif(fname, tif, verbose=True, truncate=True, bigtiff=False) im = memmap(tif) assert im.shape == (32, 6, 1680, 1680) assert tuple(im[17, :, 1500, 1000]) == (95, 109, 3597, 0, 0, 0) del im assert_valid_tiff(tif) @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON) def test_dependent_oiffile(): """Test oiffile.OifFile.""" from oiffile import OifFile fname = private_file( 'oib/MB231cell1_paxgfp_PDMSgasket_PMMAflat_30nm_378sli.oib' ) with OifFile(fname) as oib: assert oib.is_oib tifs = oib.series[0] assert len(tifs) == 756 assert tifs.shape == (2, 378) assert tifs.axes == 'CZ' # verify data data = tifs.asarray(out='memmap') assert data.flags['C_CONTIGUOUS'] assert data.shape == (2, 378, 256, 256) assert data.dtype == numpy.dtype(' None: """Set __module__ attribute for all public objects.""" globs = globals() for item in __all__: obj = globs[item] if hasattr(obj, '__module__'): obj.__module__ = 'tifffile' main.__module__ = 'tifffile' _set_module() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1577845896.0 tifffile-2025.3.30/tifffile/__main__.py0000666000000000000000000000020713603002210014414 0ustar00# tifffile/__main__.py """Tifffile package command line script.""" import sys from .tifffile import main sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1726764878.0 tifffile-2025.3.30/tifffile/_imagecodecs.py0000666000000000000000000003016314673053516015331 0ustar00# tifffile/_imagecodecs.py # Copyright (c) 2008-2024, Christoph Gohlke # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # 1. Redistributions of source code must retain the above copyright notice, # this list of conditions and the following disclaimer. # # 2. 
Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # 3. Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE # POSSIBILITY OF SUCH DAMAGE. """Fallback imagecodecs codecs. This module provides alternative, pure Python and NumPy implementations of some functions of the `imagecodecs`_ package. The functions may raise `NotImplementedError`. .. _imagecodecs: https://github.com/cgohlke/imagecodecs """ from __future__ import annotations __all__ = [ 'bitorder_decode', 'delta_decode', 'delta_encode', 'float24_decode', 'lzma_decode', 'lzma_encode', 'packbits_decode', 'packints_decode', 'packints_encode', 'zlib_decode', 'zlib_encode', ] from typing import TYPE_CHECKING, overload import numpy if TYPE_CHECKING: from typing import Any, Literal from numpy.typing import DTypeLike, NDArray try: import lzma def lzma_encode( data: bytes | NDArray[Any], /, level: int | None = None, *, out: Any = None, ) -> bytes: """Compress LZMA.""" if isinstance(data, numpy.ndarray): data = data.tobytes() return lzma.compress(data) def lzma_decode(data: bytes, /, *, out: Any = None) -> bytes: """Decompress LZMA.""" return lzma.decompress(data) except ImportError: # Python was built without lzma def lzma_encode( data: bytes | NDArray[Any], /, level: int | None = None, *, out: Any = None, ) -> bytes: """Raise ImportError.""" import lzma # noqa return b'' def lzma_decode(data: bytes, /, *, out: Any = None) -> bytes: """Raise ImportError.""" import lzma # noqa return b'' try: import zlib def zlib_encode( data: bytes | NDArray[Any], /, level: int | None = None, *, out: Any = None, ) -> bytes: """Compress Zlib DEFLATE.""" if isinstance(data, numpy.ndarray): data = data.tobytes() return zlib.compress(data, 6 if level is None else level) def zlib_decode(data: bytes, /, *, out: Any = None) -> bytes: """Decompress Zlib DEFLATE.""" return zlib.decompress(data) except ImportError: # Python was built without zlib def zlib_encode( data: bytes | NDArray[Any], /, level: int | None = None, *, out: Any = None, ) -> bytes: """Raise ImportError.""" import zlib # noqa return b'' def zlib_decode(data: bytes, /, *, out: Any = None) -> bytes: """Raise ImportError.""" import zlib # noqa return b'' def packbits_decode(encoded: bytes, /, *, out: Any = None) -> bytes: r"""Decompress PackBits encoded byte string. >>> packbits_decode(b'\x80\x80') # NOP b'' >>> packbits_decode(b'\x02123') b'123' >>> packbits_decode( ... b'\xfe\xaa\x02\x80\x00\x2a\xfd\xaa\x03\x80\x00\x2a\x22\xf7\xaa' ... 
)[:-5] b'\xaa\xaa\xaa\x80\x00*\xaa\xaa\xaa\xaa\x80\x00*"\xaa\xaa\xaa\xaa\xaa' """ out = [] out_extend = out.extend i = 0 try: while True: n = ord(encoded[i : i + 1]) + 1 i += 1 if n > 129: # replicate out_extend(encoded[i : i + 1] * (258 - n)) i += 1 elif n < 129: # literal out_extend(encoded[i : i + n]) i += n except TypeError: pass return bytes(out) @overload def delta_encode( data: bytes | bytearray, /, axis: int = -1, dist: int = 1, *, out: Any = None, ) -> bytes: ... @overload def delta_encode( data: NDArray[Any], /, axis: int = -1, dist: int = 1, *, out: Any = None ) -> NDArray[Any]: ... def delta_encode( data: bytes | bytearray | NDArray[Any], /, axis: int = -1, dist: int = 1, *, out: Any = None, ) -> bytes | NDArray[Any]: """Encode Delta.""" if dist != 1: raise NotImplementedError( f"delta_encode with {dist=} requires the 'imagecodecs' package" ) if isinstance(data, (bytes, bytearray)): data = numpy.frombuffer(data, dtype=numpy.uint8) diff = numpy.diff(data, axis=0) return numpy.insert(diff, 0, data[0]).tobytes() dtype = data.dtype if dtype.kind == 'f': data = data.view(f'{dtype.byteorder}u{dtype.itemsize}') diff = numpy.diff(data, axis=axis) key: list[int | slice] = [slice(None)] * data.ndim key[axis] = 0 diff = numpy.insert(diff, 0, data[tuple(key)], axis=axis) if not data.dtype.isnative: diff = diff.byteswap(True) diff = diff.view(diff.dtype.newbyteorder()) if dtype.kind == 'f': return diff.view(dtype) return diff @overload def delta_decode( data: bytes | bytearray, /, axis: int, dist: int, *, out: Any ) -> bytes: ... @overload def delta_decode( data: NDArray[Any], /, axis: int, dist: int, *, out: Any ) -> NDArray[Any]: ... def delta_decode( data: bytes | bytearray | NDArray[Any], /, axis: int = -1, dist: int = 1, *, out: Any = None, ) -> bytes | NDArray[Any]: """Decode Delta.""" if dist != 1: raise NotImplementedError( f"delta_decode with {dist=} requires the 'imagecodecs' package" ) if out is not None and not out.flags.writeable: out = None if isinstance(data, (bytes, bytearray)): data = numpy.frombuffer(data, dtype=numpy.uint8) return numpy.cumsum( # type: ignore[no-any-return] data, axis=0, dtype=numpy.uint8, out=out ).tobytes() if data.dtype.kind == 'f': if not data.dtype.isnative: raise NotImplementedError( f'delta_decode with {data.dtype!r} ' "requires the 'imagecodecs' package" ) view = data.view(f'{data.dtype.byteorder}u{data.dtype.itemsize}') view = numpy.cumsum(view, axis=axis, dtype=view.dtype) return view.view(data.dtype) return numpy.cumsum( # type: ignore[no-any-return] data, axis=axis, dtype=data.dtype, out=out ) @overload def bitorder_decode( data: bytes | bytearray, /, *, out: Any = None, _bitorder: list[Any] = [] ) -> bytes: ... @overload def bitorder_decode( data: NDArray[Any], /, *, out: Any = None, _bitorder: list[Any] = [] ) -> NDArray[Any]: ... def bitorder_decode( data: bytes | bytearray | NDArray[Any], /, *, out: Any = None, _bitorder: list[Any] = [], ) -> bytes | NDArray[Any]: r"""Reverse bits in each byte of bytes or numpy array. Decode data where pixels with lower column values are stored in the lower-order bits of the bytes (TIFF FillOrder is LSB2MSB). Parameters: data: Data to bit-reversed. If bytes type, a new bit-reversed bytes is returned. NumPy arrays are bit-reversed in-place. 
Examples: >>> bitorder_decode(b'\x01\x64') b'\x80&' >>> data = numpy.array([1, 666], dtype='uint16') >>> bitorder_decode(data) >>> data array([ 128, 16473], dtype=uint16) """ if not _bitorder: _bitorder.append( b'\x00\x80@\xc0 \xa0`\xe0\x10\x90P\xd00\xb0p\xf0\x08\x88H' b'\xc8(\xa8h\xe8\x18\x98X\xd88\xb8x\xf8\x04\x84D\xc4$\xa4d' b'\xe4\x14\x94T\xd44\xb4t\xf4\x0c\x8cL\xcc,\xacl\xec\x1c\x9c' b'\\\xdc<\xbc|\xfc\x02\x82B\xc2"\xa2b\xe2\x12\x92R\xd22' b'\xb2r\xf2\n\x8aJ\xca*\xaaj\xea\x1a\x9aZ\xda:\xbaz\xfa' b'\x06\x86F\xc6&\xa6f\xe6\x16\x96V\xd66\xb6v\xf6\x0e\x8eN' b'\xce.\xaen\xee\x1e\x9e^\xde>\xbe~\xfe\x01\x81A\xc1!\xa1a' b'\xe1\x11\x91Q\xd11\xb1q\xf1\t\x89I\xc9)\xa9i\xe9\x19' b'\x99Y\xd99\xb9y\xf9\x05\x85E\xc5%\xa5e\xe5\x15\x95U\xd55' b'\xb5u\xf5\r\x8dM\xcd-\xadm\xed\x1d\x9d]\xdd=\xbd}\xfd' b'\x03\x83C\xc3#\xa3c\xe3\x13\x93S\xd33\xb3s\xf3\x0b\x8bK' b'\xcb+\xabk\xeb\x1b\x9b[\xdb;\xbb{\xfb\x07\x87G\xc7\'\xa7g' b'\xe7\x17\x97W\xd77\xb7w\xf7\x0f\x8fO\xcf/\xafo\xef\x1f\x9f_' b'\xdf?\xbf\x7f\xff' ) _bitorder.append(numpy.frombuffer(_bitorder[0], dtype=numpy.uint8)) if isinstance(data, (bytes, bytearray)): return data.translate(_bitorder[0]) try: view = data.view('uint8') numpy.take(_bitorder[1], view, out=view) return data except ValueError as exc: raise NotImplementedError( "bitorder_decode of slices requires the 'imagecodecs' package" ) from exc return None # type: ignore[unreachable] def packints_decode( data: bytes, /, dtype: DTypeLike, bitspersample: int, runlen: int = 0, *, out: Any = None, ) -> NDArray[Any]: """Decompress bytes to array of integers. This implementation only handles itemsizes 1, 8, 16, 32, and 64 bits. Install the Imagecodecs package for decoding other integer sizes. Parameters: data: Data to decompress. dtype: Numpy boolean or integer type. bitspersample: Number of bits per integer. runlen: Number of consecutive integers after which to start at next byte. Examples: >>> packints_decode(b'a', 'B', 1) array([0, 1, 1, 0, 0, 0, 0, 1], dtype=uint8) """ if bitspersample == 1: # bitarray data_array = numpy.frombuffer(data, '|B') data_array = numpy.unpackbits(data_array) if runlen % 8: data_array = data_array.reshape(-1, runlen + (8 - runlen % 8)) data_array = data_array[:, :runlen].reshape(-1) return data_array.astype(dtype) if bitspersample in (8, 16, 32, 64): return numpy.frombuffer(data, dtype) raise NotImplementedError( f'packints_decode of {bitspersample}-bit integers ' "requires the 'imagecodecs' package" ) def packints_encode( data: NDArray[Any], /, bitspersample: int, axis: int = -1, *, out: Any = None, ) -> bytes: """Tightly pack integers.""" raise NotImplementedError( "packints_encode requires the 'imagecodecs' package" ) def float24_decode( data: bytes, /, byteorder: Literal['>', '<'] ) -> NDArray[Any]: """Return float32 array from float24.""" raise NotImplementedError( "float24_decode requires the 'imagecodecs' package" ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1688343459.0 tifffile-2025.3.30/tifffile/geodb.py0000666000000000000000000017054214450411643014005 0ustar00# tifffile/geodb.py """GeoTIFF GeoKey Database. 
Adapted from http://gis.ess.washington.edu/data/raster/drg/docs/geotiff.txt """ from __future__ import annotations import enum class GeoKeys(enum.IntEnum): """Geo keys.""" GTModelTypeGeoKey = 1024 GTRasterTypeGeoKey = 1025 GTCitationGeoKey = 1026 GeographicTypeGeoKey = 2048 GeogCitationGeoKey = 2049 GeogGeodeticDatumGeoKey = 2050 GeogPrimeMeridianGeoKey = 2051 GeogLinearUnitsGeoKey = 2052 GeogLinearUnitSizeGeoKey = 2053 GeogAngularUnitsGeoKey = 2054 GeogAngularUnitsSizeGeoKey = 2055 GeogEllipsoidGeoKey = 2056 GeogSemiMajorAxisGeoKey = 2057 GeogSemiMinorAxisGeoKey = 2058 GeogInvFlatteningGeoKey = 2059 GeogAzimuthUnitsGeoKey = 2060 GeogPrimeMeridianLongGeoKey = 2061 GeogTOWGS84GeoKey = 2062 ProjLinearUnitsInterpCorrectGeoKey = 3059 # GDAL ProjectedCSTypeGeoKey = 3072 PCSCitationGeoKey = 3073 ProjectionGeoKey = 3074 ProjCoordTransGeoKey = 3075 ProjLinearUnitsGeoKey = 3076 ProjLinearUnitSizeGeoKey = 3077 ProjStdParallel1GeoKey = 3078 ProjStdParallel2GeoKey = 3079 ProjNatOriginLongGeoKey = 3080 ProjNatOriginLatGeoKey = 3081 ProjFalseEastingGeoKey = 3082 ProjFalseNorthingGeoKey = 3083 ProjFalseOriginLongGeoKey = 3084 ProjFalseOriginLatGeoKey = 3085 ProjFalseOriginEastingGeoKey = 3086 ProjFalseOriginNorthingGeoKey = 3087 ProjCenterLongGeoKey = 3088 ProjCenterLatGeoKey = 3089 ProjCenterEastingGeoKey = 3090 ProjCenterNorthingGeoKey = 3091 ProjScaleAtNatOriginGeoKey = 3092 ProjScaleAtCenterGeoKey = 3093 ProjAzimuthAngleGeoKey = 3094 ProjStraightVertPoleLongGeoKey = 3095 ProjRectifiedGridAngleGeoKey = 3096 VerticalCSTypeGeoKey = 4096 VerticalCitationGeoKey = 4097 VerticalDatumGeoKey = 4098 VerticalUnitsGeoKey = 4099 class Proj(enum.IntEnum): """Projection Codes.""" Undefined = 0 User_Defined = 32767 Alabama_CS27_East = 10101 Alabama_CS27_West = 10102 Alabama_CS83_East = 10131 Alabama_CS83_West = 10132 Arizona_Coordinate_System_east = 10201 Arizona_Coordinate_System_Central = 10202 Arizona_Coordinate_System_west = 10203 Arizona_CS83_east = 10231 Arizona_CS83_Central = 10232 Arizona_CS83_west = 10233 Arkansas_CS27_North = 10301 Arkansas_CS27_South = 10302 Arkansas_CS83_North = 10331 Arkansas_CS83_South = 10332 California_CS27_I = 10401 California_CS27_II = 10402 California_CS27_III = 10403 California_CS27_IV = 10404 California_CS27_V = 10405 California_CS27_VI = 10406 California_CS27_VII = 10407 California_CS83_1 = 10431 California_CS83_2 = 10432 California_CS83_3 = 10433 California_CS83_4 = 10434 California_CS83_5 = 10435 California_CS83_6 = 10436 Colorado_CS27_North = 10501 Colorado_CS27_Central = 10502 Colorado_CS27_South = 10503 Colorado_CS83_North = 10531 Colorado_CS83_Central = 10532 Colorado_CS83_South = 10533 Connecticut_CS27 = 10600 Connecticut_CS83 = 10630 Delaware_CS27 = 10700 Delaware_CS83 = 10730 Florida_CS27_East = 10901 Florida_CS27_West = 10902 Florida_CS27_North = 10903 Florida_CS83_East = 10931 Florida_CS83_West = 10932 Florida_CS83_North = 10933 Georgia_CS27_East = 11001 Georgia_CS27_West = 11002 Georgia_CS83_East = 11031 Georgia_CS83_West = 11032 Idaho_CS27_East = 11101 Idaho_CS27_Central = 11102 Idaho_CS27_West = 11103 Idaho_CS83_East = 11131 Idaho_CS83_Central = 11132 Idaho_CS83_West = 11133 Illinois_CS27_East = 11201 Illinois_CS27_West = 11202 Illinois_CS83_East = 11231 Illinois_CS83_West = 11232 Indiana_CS27_East = 11301 Indiana_CS27_West = 11302 Indiana_CS83_East = 11331 Indiana_CS83_West = 11332 Iowa_CS27_North = 11401 Iowa_CS27_South = 11402 Iowa_CS83_North = 11431 Iowa_CS83_South = 11432 Kansas_CS27_North = 11501 Kansas_CS27_South = 11502 Kansas_CS83_North = 11531 
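# How these codes are typically consumed: TiffFile decodes the
# GeoKeyDirectoryTag into the geotiff_metadata mapping, whose values can
# be resolved against the GeoKeys/Proj/PCS enums in this module. A
# minimal read-side sketch, assuming 'geo.tif' is a hypothetical GeoTIFF:
#
#     from tifffile import TiffFile
#     with TiffFile('geo.tif') as tif:
#         meta = tif.geotiff_metadata  # None unless tif.is_geotiff
#         if meta and 'ProjectionGeoKey' in meta:
#             print(Proj(meta['ProjectionGeoKey']))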
Kansas_CS83_South = 11532 Kentucky_CS27_North = 11601 Kentucky_CS27_South = 11602 Kentucky_CS83_North = 15303 Kentucky_CS83_South = 11632 Louisiana_CS27_North = 11701 Louisiana_CS27_South = 11702 Louisiana_CS83_North = 11731 Louisiana_CS83_South = 11732 Maine_CS27_East = 11801 Maine_CS27_West = 11802 Maine_CS83_East = 11831 Maine_CS83_West = 11832 Maryland_CS27 = 11900 Maryland_CS83 = 11930 Massachusetts_CS27_Mainland = 12001 Massachusetts_CS27_Island = 12002 Massachusetts_CS83_Mainland = 12031 Massachusetts_CS83_Island = 12032 Michigan_State_Plane_East = 12101 Michigan_State_Plane_Old_Central = 12102 Michigan_State_Plane_West = 12103 Michigan_CS27_North = 12111 Michigan_CS27_Central = 12112 Michigan_CS27_South = 12113 Michigan_CS83_North = 12141 Michigan_CS83_Central = 12142 Michigan_CS83_South = 12143 Minnesota_CS27_North = 12201 Minnesota_CS27_Central = 12202 Minnesota_CS27_South = 12203 Minnesota_CS83_North = 12231 Minnesota_CS83_Central = 12232 Minnesota_CS83_South = 12233 Mississippi_CS27_East = 12301 Mississippi_CS27_West = 12302 Mississippi_CS83_East = 12331 Mississippi_CS83_West = 12332 Missouri_CS27_East = 12401 Missouri_CS27_Central = 12402 Missouri_CS27_West = 12403 Missouri_CS83_East = 12431 Missouri_CS83_Central = 12432 Missouri_CS83_West = 12433 Montana_CS27_North = 12501 Montana_CS27_Central = 12502 Montana_CS27_South = 12503 Montana_CS83 = 12530 Nebraska_CS27_North = 12601 Nebraska_CS27_South = 12602 Nebraska_CS83 = 12630 Nevada_CS27_East = 12701 Nevada_CS27_Central = 12702 Nevada_CS27_West = 12703 Nevada_CS83_East = 12731 Nevada_CS83_Central = 12732 Nevada_CS83_West = 12733 New_Hampshire_CS27 = 12800 New_Hampshire_CS83 = 12830 New_Jersey_CS27 = 12900 New_Jersey_CS83 = 12930 New_Mexico_CS27_East = 13001 New_Mexico_CS27_Central = 13002 New_Mexico_CS27_West = 13003 New_Mexico_CS83_East = 13031 New_Mexico_CS83_Central = 13032 New_Mexico_CS83_West = 13033 New_York_CS27_East = 13101 New_York_CS27_Central = 13102 New_York_CS27_West = 13103 New_York_CS27_Long_Island = 13104 New_York_CS83_East = 13131 New_York_CS83_Central = 13132 New_York_CS83_West = 13133 New_York_CS83_Long_Island = 13134 North_Carolina_CS27 = 13200 North_Carolina_CS83 = 13230 North_Dakota_CS27_North = 13301 North_Dakota_CS27_South = 13302 North_Dakota_CS83_North = 13331 North_Dakota_CS83_South = 13332 Ohio_CS27_North = 13401 Ohio_CS27_South = 13402 Ohio_CS83_North = 13431 Ohio_CS83_South = 13432 Oklahoma_CS27_North = 13501 Oklahoma_CS27_South = 13502 Oklahoma_CS83_North = 13531 Oklahoma_CS83_South = 13532 Oregon_CS27_North = 13601 Oregon_CS27_South = 13602 Oregon_CS83_North = 13631 Oregon_CS83_South = 13632 Pennsylvania_CS27_North = 13701 Pennsylvania_CS27_South = 13702 Pennsylvania_CS83_North = 13731 Pennsylvania_CS83_South = 13732 Rhode_Island_CS27 = 13800 Rhode_Island_CS83 = 13830 South_Carolina_CS27_North = 13901 South_Carolina_CS27_South = 13902 South_Carolina_CS83 = 13930 South_Dakota_CS27_North = 14001 South_Dakota_CS27_South = 14002 South_Dakota_CS83_North = 14031 South_Dakota_CS83_South = 14032 Tennessee_CS27 = 15302 Tennessee_CS83 = 14130 Texas_CS27_North = 14201 Texas_CS27_North_Central = 14202 Texas_CS27_Central = 14203 Texas_CS27_South_Central = 14204 Texas_CS27_South = 14205 Texas_CS83_North = 14231 Texas_CS83_North_Central = 14232 Texas_CS83_Central = 14233 Texas_CS83_South_Central = 14234 Texas_CS83_South = 14235 Utah_CS27_North = 14301 Utah_CS27_Central = 14302 Utah_CS27_South = 14303 Utah_CS83_North = 14331 Utah_CS83_Central = 14332 Utah_CS83_South = 14333 Vermont_CS27 = 14400 Vermont_CS83 = 
14430 Virginia_CS27_North = 14501 Virginia_CS27_South = 14502 Virginia_CS83_North = 14531 Virginia_CS83_South = 14532 Washington_CS27_North = 14601 Washington_CS27_South = 14602 Washington_CS83_North = 14631 Washington_CS83_South = 14632 West_Virginia_CS27_North = 14701 West_Virginia_CS27_South = 14702 West_Virginia_CS83_North = 14731 West_Virginia_CS83_South = 14732 Wisconsin_CS27_North = 14801 Wisconsin_CS27_Central = 14802 Wisconsin_CS27_South = 14803 Wisconsin_CS83_North = 14831 Wisconsin_CS83_Central = 14832 Wisconsin_CS83_South = 14833 Wyoming_CS27_East = 14901 Wyoming_CS27_East_Central = 14902 Wyoming_CS27_West_Central = 14903 Wyoming_CS27_West = 14904 Wyoming_CS83_East = 14931 Wyoming_CS83_East_Central = 14932 Wyoming_CS83_West_Central = 14933 Wyoming_CS83_West = 14934 Alaska_CS27_1 = 15001 Alaska_CS27_2 = 15002 Alaska_CS27_3 = 15003 Alaska_CS27_4 = 15004 Alaska_CS27_5 = 15005 Alaska_CS27_6 = 15006 Alaska_CS27_7 = 15007 Alaska_CS27_8 = 15008 Alaska_CS27_9 = 15009 Alaska_CS27_10 = 15010 Alaska_CS83_1 = 15031 Alaska_CS83_2 = 15032 Alaska_CS83_3 = 15033 Alaska_CS83_4 = 15034 Alaska_CS83_5 = 15035 Alaska_CS83_6 = 15036 Alaska_CS83_7 = 15037 Alaska_CS83_8 = 15038 Alaska_CS83_9 = 15039 Alaska_CS83_10 = 15040 Hawaii_CS27_1 = 15101 Hawaii_CS27_2 = 15102 Hawaii_CS27_3 = 15103 Hawaii_CS27_4 = 15104 Hawaii_CS27_5 = 15105 Hawaii_CS83_1 = 15131 Hawaii_CS83_2 = 15132 Hawaii_CS83_3 = 15133 Hawaii_CS83_4 = 15134 Hawaii_CS83_5 = 15135 Puerto_Rico_CS27 = 15201 St_Croix = 15202 Puerto_Rico_Virgin_Is = 15230 BLM_14N_feet = 15914 BLM_15N_feet = 15915 BLM_16N_feet = 15916 BLM_17N_feet = 15917 UTM_zone_1N = 16001 UTM_zone_2N = 16002 UTM_zone_3N = 16003 UTM_zone_4N = 16004 UTM_zone_5N = 16005 UTM_zone_6N = 16006 UTM_zone_7N = 16007 UTM_zone_8N = 16008 UTM_zone_9N = 16009 UTM_zone_10N = 16010 UTM_zone_11N = 16011 UTM_zone_12N = 16012 UTM_zone_13N = 16013 UTM_zone_14N = 16014 UTM_zone_15N = 16015 UTM_zone_16N = 16016 UTM_zone_17N = 16017 UTM_zone_18N = 16018 UTM_zone_19N = 16019 UTM_zone_20N = 16020 UTM_zone_21N = 16021 UTM_zone_22N = 16022 UTM_zone_23N = 16023 UTM_zone_24N = 16024 UTM_zone_25N = 16025 UTM_zone_26N = 16026 UTM_zone_27N = 16027 UTM_zone_28N = 16028 UTM_zone_29N = 16029 UTM_zone_30N = 16030 UTM_zone_31N = 16031 UTM_zone_32N = 16032 UTM_zone_33N = 16033 UTM_zone_34N = 16034 UTM_zone_35N = 16035 UTM_zone_36N = 16036 UTM_zone_37N = 16037 UTM_zone_38N = 16038 UTM_zone_39N = 16039 UTM_zone_40N = 16040 UTM_zone_41N = 16041 UTM_zone_42N = 16042 UTM_zone_43N = 16043 UTM_zone_44N = 16044 UTM_zone_45N = 16045 UTM_zone_46N = 16046 UTM_zone_47N = 16047 UTM_zone_48N = 16048 UTM_zone_49N = 16049 UTM_zone_50N = 16050 UTM_zone_51N = 16051 UTM_zone_52N = 16052 UTM_zone_53N = 16053 UTM_zone_54N = 16054 UTM_zone_55N = 16055 UTM_zone_56N = 16056 UTM_zone_57N = 16057 UTM_zone_58N = 16058 UTM_zone_59N = 16059 UTM_zone_60N = 16060 UTM_zone_1S = 16101 UTM_zone_2S = 16102 UTM_zone_3S = 16103 UTM_zone_4S = 16104 UTM_zone_5S = 16105 UTM_zone_6S = 16106 UTM_zone_7S = 16107 UTM_zone_8S = 16108 UTM_zone_9S = 16109 UTM_zone_10S = 16110 UTM_zone_11S = 16111 UTM_zone_12S = 16112 UTM_zone_13S = 16113 UTM_zone_14S = 16114 UTM_zone_15S = 16115 UTM_zone_16S = 16116 UTM_zone_17S = 16117 UTM_zone_18S = 16118 UTM_zone_19S = 16119 UTM_zone_20S = 16120 UTM_zone_21S = 16121 UTM_zone_22S = 16122 UTM_zone_23S = 16123 UTM_zone_24S = 16124 UTM_zone_25S = 16125 UTM_zone_26S = 16126 UTM_zone_27S = 16127 UTM_zone_28S = 16128 UTM_zone_29S = 16129 UTM_zone_30S = 16130 UTM_zone_31S = 16131 UTM_zone_32S = 16132 UTM_zone_33S = 16133 UTM_zone_34S 
= 16134 UTM_zone_35S = 16135 UTM_zone_36S = 16136 UTM_zone_37S = 16137 UTM_zone_38S = 16138 UTM_zone_39S = 16139 UTM_zone_40S = 16140 UTM_zone_41S = 16141 UTM_zone_42S = 16142 UTM_zone_43S = 16143 UTM_zone_44S = 16144 UTM_zone_45S = 16145 UTM_zone_46S = 16146 UTM_zone_47S = 16147 UTM_zone_48S = 16148 UTM_zone_49S = 16149 UTM_zone_50S = 16150 UTM_zone_51S = 16151 UTM_zone_52S = 16152 UTM_zone_53S = 16153 UTM_zone_54S = 16154 UTM_zone_55S = 16155 UTM_zone_56S = 16156 UTM_zone_57S = 16157 UTM_zone_58S = 16158 UTM_zone_59S = 16159 UTM_zone_60S = 16160 Gauss_Kruger_zone_0 = 16200 Gauss_Kruger_zone_1 = 16201 Gauss_Kruger_zone_2 = 16202 Gauss_Kruger_zone_3 = 16203 Gauss_Kruger_zone_4 = 16204 Gauss_Kruger_zone_5 = 16205 Map_Grid_of_Australia_48 = 17348 Map_Grid_of_Australia_49 = 17349 Map_Grid_of_Australia_50 = 17350 Map_Grid_of_Australia_51 = 17351 Map_Grid_of_Australia_52 = 17352 Map_Grid_of_Australia_53 = 17353 Map_Grid_of_Australia_54 = 17354 Map_Grid_of_Australia_55 = 17355 Map_Grid_of_Australia_56 = 17356 Map_Grid_of_Australia_57 = 17357 Map_Grid_of_Australia_58 = 17358 Australian_Map_Grid_48 = 17448 Australian_Map_Grid_49 = 17449 Australian_Map_Grid_50 = 17450 Australian_Map_Grid_51 = 17451 Australian_Map_Grid_52 = 17452 Australian_Map_Grid_53 = 17453 Australian_Map_Grid_54 = 17454 Australian_Map_Grid_55 = 17455 Australian_Map_Grid_56 = 17456 Australian_Map_Grid_57 = 17457 Australian_Map_Grid_58 = 17458 Argentina_1 = 18031 Argentina_2 = 18032 Argentina_3 = 18033 Argentina_4 = 18034 Argentina_5 = 18035 Argentina_6 = 18036 Argentina_7 = 18037 Colombia_3W = 18051 Colombia_Bogota = 18052 Colombia_3E = 18053 Colombia_6E = 18054 Egypt_Red_Belt = 18072 Egypt_Purple_Belt = 18073 Extended_Purple_Belt = 18074 New_Zealand_North_Island_Nat_Grid = 18141 New_Zealand_South_Island_Nat_Grid = 18142 Bahrain_Grid = 19900 Netherlands_E_Indies_Equatorial = 19905 RSO_Borneo = 19912 Stereo_70 = 19926 class PCS(enum.IntEnum): """Projected CS Type Codes.""" Undefined = 0 User_Defined = 32767 Adindan_UTM_zone_37N = 20137 Adindan_UTM_zone_38N = 20138 AGD66_AMG_zone_48 = 20248 AGD66_AMG_zone_49 = 20249 AGD66_AMG_zone_50 = 20250 AGD66_AMG_zone_51 = 20251 AGD66_AMG_zone_52 = 20252 AGD66_AMG_zone_53 = 20253 AGD66_AMG_zone_54 = 20254 AGD66_AMG_zone_55 = 20255 AGD66_AMG_zone_56 = 20256 AGD66_AMG_zone_57 = 20257 AGD66_AMG_zone_58 = 20258 AGD84_AMG_zone_48 = 20348 AGD84_AMG_zone_49 = 20349 AGD84_AMG_zone_50 = 20350 AGD84_AMG_zone_51 = 20351 AGD84_AMG_zone_52 = 20352 AGD84_AMG_zone_53 = 20353 AGD84_AMG_zone_54 = 20354 AGD84_AMG_zone_55 = 20355 AGD84_AMG_zone_56 = 20356 AGD84_AMG_zone_57 = 20357 AGD84_AMG_zone_58 = 20358 Ain_el_Abd_UTM_zone_37N = 20437 Ain_el_Abd_UTM_zone_38N = 20438 Ain_el_Abd_UTM_zone_39N = 20439 Ain_el_Abd_Bahrain_Grid = 20499 Afgooye_UTM_zone_38N = 20538 Afgooye_UTM_zone_39N = 20539 Lisbon_Portugese_Grid = 20700 Aratu_UTM_zone_22S = 20822 Aratu_UTM_zone_23S = 20823 Aratu_UTM_zone_24S = 20824 Arc_1950_Lo13 = 20973 Arc_1950_Lo15 = 20975 Arc_1950_Lo17 = 20977 Arc_1950_Lo19 = 20979 Arc_1950_Lo21 = 20981 Arc_1950_Lo23 = 20983 Arc_1950_Lo25 = 20985 Arc_1950_Lo27 = 20987 Arc_1950_Lo29 = 20989 Arc_1950_Lo31 = 20991 Arc_1950_Lo33 = 20993 Arc_1950_Lo35 = 20995 Batavia_NEIEZ = 21100 Batavia_UTM_zone_48S = 21148 Batavia_UTM_zone_49S = 21149 Batavia_UTM_zone_50S = 21150 Beijing_Gauss_zone_13 = 21413 Beijing_Gauss_zone_14 = 21414 Beijing_Gauss_zone_15 = 21415 Beijing_Gauss_zone_16 = 21416 Beijing_Gauss_zone_17 = 21417 Beijing_Gauss_zone_18 = 21418 Beijing_Gauss_zone_19 = 21419 Beijing_Gauss_zone_20 = 21420 
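# Because PCS is an IntEnum, numeric ProjectedCSTypeGeoKey values read
# from a file resolve directly to symbolic names, and unregistered codes
# raise ValueError. A small usage sketch (both codes are defined in this
# class):
#
#     PCS(26915)        # <PCS.NAD83_UTM_zone_15N: 26915>
#     PCS['Undefined']  # lookup by member name
#     try:
#         PCS(99999)    # not a code in this registry subset
#     except ValueError:
#         pass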
Beijing_Gauss_zone_21 = 21421 Beijing_Gauss_zone_22 = 21422 Beijing_Gauss_zone_23 = 21423 Beijing_Gauss_13N = 21473 Beijing_Gauss_14N = 21474 Beijing_Gauss_15N = 21475 Beijing_Gauss_16N = 21476 Beijing_Gauss_17N = 21477 Beijing_Gauss_18N = 21478 Beijing_Gauss_19N = 21479 Beijing_Gauss_20N = 21480 Beijing_Gauss_21N = 21481 Beijing_Gauss_22N = 21482 Beijing_Gauss_23N = 21483 Belge_Lambert_50 = 21500 Bern_1898_Swiss_Old = 21790 Bogota_UTM_zone_17N = 21817 Bogota_UTM_zone_18N = 21818 Bogota_Colombia_3W = 21891 Bogota_Colombia_Bogota = 21892 Bogota_Colombia_3E = 21893 Bogota_Colombia_6E = 21894 Camacupa_UTM_32S = 22032 Camacupa_UTM_33S = 22033 C_Inchauspe_Argentina_1 = 22191 C_Inchauspe_Argentina_2 = 22192 C_Inchauspe_Argentina_3 = 22193 C_Inchauspe_Argentina_4 = 22194 C_Inchauspe_Argentina_5 = 22195 C_Inchauspe_Argentina_6 = 22196 C_Inchauspe_Argentina_7 = 22197 Carthage_UTM_zone_32N = 22332 Carthage_Nord_Tunisie = 22391 Carthage_Sud_Tunisie = 22392 Corrego_Alegre_UTM_23S = 22523 Corrego_Alegre_UTM_24S = 22524 Douala_UTM_zone_32N = 22832 Egypt_1907_Red_Belt = 22992 Egypt_1907_Purple_Belt = 22993 Egypt_1907_Ext_Purple = 22994 ED50_UTM_zone_28N = 23028 ED50_UTM_zone_29N = 23029 ED50_UTM_zone_30N = 23030 ED50_UTM_zone_31N = 23031 ED50_UTM_zone_32N = 23032 ED50_UTM_zone_33N = 23033 ED50_UTM_zone_34N = 23034 ED50_UTM_zone_35N = 23035 ED50_UTM_zone_36N = 23036 ED50_UTM_zone_37N = 23037 ED50_UTM_zone_38N = 23038 Fahud_UTM_zone_39N = 23239 Fahud_UTM_zone_40N = 23240 Garoua_UTM_zone_33N = 23433 ID74_UTM_zone_46N = 23846 ID74_UTM_zone_47N = 23847 ID74_UTM_zone_48N = 23848 ID74_UTM_zone_49N = 23849 ID74_UTM_zone_50N = 23850 ID74_UTM_zone_51N = 23851 ID74_UTM_zone_52N = 23852 ID74_UTM_zone_53N = 23853 ID74_UTM_zone_46S = 23886 ID74_UTM_zone_47S = 23887 ID74_UTM_zone_48S = 23888 ID74_UTM_zone_49S = 23889 ID74_UTM_zone_50S = 23890 ID74_UTM_zone_51S = 23891 ID74_UTM_zone_52S = 23892 ID74_UTM_zone_53S = 23893 ID74_UTM_zone_54S = 23894 Indian_1954_UTM_47N = 23947 Indian_1954_UTM_48N = 23948 Indian_1975_UTM_47N = 24047 Indian_1975_UTM_48N = 24048 Jamaica_1875_Old_Grid = 24100 JAD69_Jamaica_Grid = 24200 Kalianpur_India_0 = 24370 Kalianpur_India_I = 24371 Kalianpur_India_IIa = 24372 Kalianpur_India_IIIa = 24373 Kalianpur_India_IVa = 24374 Kalianpur_India_IIb = 24382 Kalianpur_India_IIIb = 24383 Kalianpur_India_IVb = 24384 Kertau_Singapore_Grid = 24500 Kertau_UTM_zone_47N = 24547 Kertau_UTM_zone_48N = 24548 La_Canoa_UTM_zone_20N = 24720 La_Canoa_UTM_zone_21N = 24721 PSAD56_UTM_zone_18N = 24818 PSAD56_UTM_zone_19N = 24819 PSAD56_UTM_zone_20N = 24820 PSAD56_UTM_zone_21N = 24821 PSAD56_UTM_zone_17S = 24877 PSAD56_UTM_zone_18S = 24878 PSAD56_UTM_zone_19S = 24879 PSAD56_UTM_zone_20S = 24880 PSAD56_Peru_west_zone = 24891 PSAD56_Peru_central = 24892 PSAD56_Peru_east_zone = 24893 Leigon_Ghana_Grid = 25000 Lome_UTM_zone_31N = 25231 Luzon_Philippines_I = 25391 Luzon_Philippines_II = 25392 Luzon_Philippines_III = 25393 Luzon_Philippines_IV = 25394 Luzon_Philippines_V = 25395 Makassar_NEIEZ = 25700 Malongo_1987_UTM_32S = 25932 Merchich_Nord_Maroc = 26191 Merchich_Sud_Maroc = 26192 Merchich_Sahara = 26193 Massawa_UTM_zone_37N = 26237 Minna_UTM_zone_31N = 26331 Minna_UTM_zone_32N = 26332 Minna_Nigeria_West = 26391 Minna_Nigeria_Mid_Belt = 26392 Minna_Nigeria_East = 26393 Mhast_UTM_zone_32S = 26432 Monte_Mario_Italy_1 = 26591 Monte_Mario_Italy_2 = 26592 M_poraloko_UTM_32N = 26632 M_poraloko_UTM_32S = 26692 NAD27_UTM_zone_3N = 26703 NAD27_UTM_zone_4N = 26704 NAD27_UTM_zone_5N = 26705 NAD27_UTM_zone_6N = 26706 NAD27_UTM_zone_7N = 
26707 NAD27_UTM_zone_8N = 26708 NAD27_UTM_zone_9N = 26709 NAD27_UTM_zone_10N = 26710 NAD27_UTM_zone_11N = 26711 NAD27_UTM_zone_12N = 26712 NAD27_UTM_zone_13N = 26713 NAD27_UTM_zone_14N = 26714 NAD27_UTM_zone_15N = 26715 NAD27_UTM_zone_16N = 26716 NAD27_UTM_zone_17N = 26717 NAD27_UTM_zone_18N = 26718 NAD27_UTM_zone_19N = 26719 NAD27_UTM_zone_20N = 26720 NAD27_UTM_zone_21N = 26721 NAD27_UTM_zone_22N = 26722 NAD27_Alabama_East = 26729 NAD27_Alabama_West = 26730 NAD27_Alaska_zone_1 = 26731 NAD27_Alaska_zone_2 = 26732 NAD27_Alaska_zone_3 = 26733 NAD27_Alaska_zone_4 = 26734 NAD27_Alaska_zone_5 = 26735 NAD27_Alaska_zone_6 = 26736 NAD27_Alaska_zone_7 = 26737 NAD27_Alaska_zone_8 = 26738 NAD27_Alaska_zone_9 = 26739 NAD27_Alaska_zone_10 = 26740 NAD27_California_I = 26741 NAD27_California_II = 26742 NAD27_California_III = 26743 NAD27_California_IV = 26744 NAD27_California_V = 26745 NAD27_California_VI = 26746 NAD27_California_VII = 26747 NAD27_Arizona_East = 26748 NAD27_Arizona_Central = 26749 NAD27_Arizona_West = 26750 NAD27_Arkansas_North = 26751 NAD27_Arkansas_South = 26752 NAD27_Colorado_North = 26753 NAD27_Colorado_Central = 26754 NAD27_Colorado_South = 26755 NAD27_Connecticut = 26756 NAD27_Delaware = 26757 NAD27_Florida_East = 26758 NAD27_Florida_West = 26759 NAD27_Florida_North = 26760 NAD27_Hawaii_zone_1 = 26761 NAD27_Hawaii_zone_2 = 26762 NAD27_Hawaii_zone_3 = 26763 NAD27_Hawaii_zone_4 = 26764 NAD27_Hawaii_zone_5 = 26765 NAD27_Georgia_East = 26766 NAD27_Georgia_West = 26767 NAD27_Idaho_East = 26768 NAD27_Idaho_Central = 26769 NAD27_Idaho_West = 26770 NAD27_Illinois_East = 26771 NAD27_Illinois_West = 26772 NAD27_Indiana_East = 26773 NAD27_BLM_14N_feet = 26774 NAD27_Indiana_West = 26774 NAD27_BLM_15N_feet = 26775 NAD27_Iowa_North = 26775 NAD27_BLM_16N_feet = 26776 NAD27_Iowa_South = 26776 NAD27_BLM_17N_feet = 26777 NAD27_Kansas_North = 26777 NAD27_Kansas_South = 26778 NAD27_Kentucky_North = 26779 NAD27_Kentucky_South = 26780 NAD27_Louisiana_North = 26781 NAD27_Louisiana_South = 26782 NAD27_Maine_East = 26783 NAD27_Maine_West = 26784 NAD27_Maryland = 26785 NAD27_Massachusetts = 26786 NAD27_Massachusetts_Is = 26787 NAD27_Michigan_North = 26788 NAD27_Michigan_Central = 26789 NAD27_Michigan_South = 26790 NAD27_Minnesota_North = 26791 NAD27_Minnesota_Cent = 26792 NAD27_Minnesota_South = 26793 NAD27_Mississippi_East = 26794 NAD27_Mississippi_West = 26795 NAD27_Missouri_East = 26796 NAD27_Missouri_Central = 26797 NAD27_Missouri_West = 26798 NAD_Michigan_Michigan_East = 26801 NAD_Michigan_Michigan_Old_Central = 26802 NAD_Michigan_Michigan_West = 26803 NAD83_UTM_zone_3N = 26903 NAD83_UTM_zone_4N = 26904 NAD83_UTM_zone_5N = 26905 NAD83_UTM_zone_6N = 26906 NAD83_UTM_zone_7N = 26907 NAD83_UTM_zone_8N = 26908 NAD83_UTM_zone_9N = 26909 NAD83_UTM_zone_10N = 26910 NAD83_UTM_zone_11N = 26911 NAD83_UTM_zone_12N = 26912 NAD83_UTM_zone_13N = 26913 NAD83_UTM_zone_14N = 26914 NAD83_UTM_zone_15N = 26915 NAD83_UTM_zone_16N = 26916 NAD83_UTM_zone_17N = 26917 NAD83_UTM_zone_18N = 26918 NAD83_UTM_zone_19N = 26919 NAD83_UTM_zone_20N = 26920 NAD83_UTM_zone_21N = 26921 NAD83_UTM_zone_22N = 26922 NAD83_UTM_zone_23N = 26923 NAD83_Alabama_East = 26929 NAD83_Alabama_West = 26930 NAD83_Alaska_zone_1 = 26931 NAD83_Alaska_zone_2 = 26932 NAD83_Alaska_zone_3 = 26933 NAD83_Alaska_zone_4 = 26934 NAD83_Alaska_zone_5 = 26935 NAD83_Alaska_zone_6 = 26936 NAD83_Alaska_zone_7 = 26937 NAD83_Alaska_zone_8 = 26938 NAD83_Alaska_zone_9 = 26939 NAD83_Alaska_zone_10 = 26940 NAD83_California_1 = 26941 NAD83_California_2 = 26942 NAD83_California_3 = 
26943 NAD83_California_4 = 26944 NAD83_California_5 = 26945 NAD83_California_6 = 26946 NAD83_Arizona_East = 26948 NAD83_Arizona_Central = 26949 NAD83_Arizona_West = 26950 NAD83_Arkansas_North = 26951 NAD83_Arkansas_South = 26952 NAD83_Colorado_North = 26953 NAD83_Colorado_Central = 26954 NAD83_Colorado_South = 26955 NAD83_Connecticut = 26956 NAD83_Delaware = 26957 NAD83_Florida_East = 26958 NAD83_Florida_West = 26959 NAD83_Florida_North = 26960 NAD83_Hawaii_zone_1 = 26961 NAD83_Hawaii_zone_2 = 26962 NAD83_Hawaii_zone_3 = 26963 NAD83_Hawaii_zone_4 = 26964 NAD83_Hawaii_zone_5 = 26965 NAD83_Georgia_East = 26966 NAD83_Georgia_West = 26967 NAD83_Idaho_East = 26968 NAD83_Idaho_Central = 26969 NAD83_Idaho_West = 26970 NAD83_Illinois_East = 26971 NAD83_Illinois_West = 26972 NAD83_Indiana_East = 26973 NAD83_Indiana_West = 26974 NAD83_Iowa_North = 26975 NAD83_Iowa_South = 26976 NAD83_Kansas_North = 26977 NAD83_Kansas_South = 26978 NAD83_Kentucky_North = 2205 NAD83_Kentucky_South = 26980 NAD83_Louisiana_North = 26981 NAD83_Louisiana_South = 26982 NAD83_Maine_East = 26983 NAD83_Maine_West = 26984 NAD83_Maryland = 26985 NAD83_Massachusetts = 26986 NAD83_Massachusetts_Is = 26987 NAD83_Michigan_North = 26988 NAD83_Michigan_Central = 26989 NAD83_Michigan_South = 26990 NAD83_Minnesota_North = 26991 NAD83_Minnesota_Cent = 26992 NAD83_Minnesota_South = 26993 NAD83_Mississippi_East = 26994 NAD83_Mississippi_West = 26995 NAD83_Missouri_East = 26996 NAD83_Missouri_Central = 26997 NAD83_Missouri_West = 26998 Nahrwan_1967_UTM_38N = 27038 Nahrwan_1967_UTM_39N = 27039 Nahrwan_1967_UTM_40N = 27040 Naparima_UTM_20N = 27120 GD49_NZ_Map_Grid = 27200 GD49_North_Island_Grid = 27291 GD49_South_Island_Grid = 27292 Datum_73_UTM_zone_29N = 27429 ATF_Nord_de_Guerre = 27500 NTF_France_I = 27581 NTF_France_II = 27582 NTF_France_III = 27583 NTF_Nord_France = 27591 NTF_Centre_France = 27592 NTF_Sud_France = 27593 British_National_Grid = 27700 Point_Noire_UTM_32S = 28232 GDA94_MGA_zone_48 = 28348 GDA94_MGA_zone_49 = 28349 GDA94_MGA_zone_50 = 28350 GDA94_MGA_zone_51 = 28351 GDA94_MGA_zone_52 = 28352 GDA94_MGA_zone_53 = 28353 GDA94_MGA_zone_54 = 28354 GDA94_MGA_zone_55 = 28355 GDA94_MGA_zone_56 = 28356 GDA94_MGA_zone_57 = 28357 GDA94_MGA_zone_58 = 28358 Pulkovo_Gauss_zone_4 = 28404 Pulkovo_Gauss_zone_5 = 28405 Pulkovo_Gauss_zone_6 = 28406 Pulkovo_Gauss_zone_7 = 28407 Pulkovo_Gauss_zone_8 = 28408 Pulkovo_Gauss_zone_9 = 28409 Pulkovo_Gauss_zone_10 = 28410 Pulkovo_Gauss_zone_11 = 28411 Pulkovo_Gauss_zone_12 = 28412 Pulkovo_Gauss_zone_13 = 28413 Pulkovo_Gauss_zone_14 = 28414 Pulkovo_Gauss_zone_15 = 28415 Pulkovo_Gauss_zone_16 = 28416 Pulkovo_Gauss_zone_17 = 28417 Pulkovo_Gauss_zone_18 = 28418 Pulkovo_Gauss_zone_19 = 28419 Pulkovo_Gauss_zone_20 = 28420 Pulkovo_Gauss_zone_21 = 28421 Pulkovo_Gauss_zone_22 = 28422 Pulkovo_Gauss_zone_23 = 28423 Pulkovo_Gauss_zone_24 = 28424 Pulkovo_Gauss_zone_25 = 28425 Pulkovo_Gauss_zone_26 = 28426 Pulkovo_Gauss_zone_27 = 28427 Pulkovo_Gauss_zone_28 = 28428 Pulkovo_Gauss_zone_29 = 28429 Pulkovo_Gauss_zone_30 = 28430 Pulkovo_Gauss_zone_31 = 28431 Pulkovo_Gauss_zone_32 = 28432 Pulkovo_Gauss_4N = 28464 Pulkovo_Gauss_5N = 28465 Pulkovo_Gauss_6N = 28466 Pulkovo_Gauss_7N = 28467 Pulkovo_Gauss_8N = 28468 Pulkovo_Gauss_9N = 28469 Pulkovo_Gauss_10N = 28470 Pulkovo_Gauss_11N = 28471 Pulkovo_Gauss_12N = 28472 Pulkovo_Gauss_13N = 28473 Pulkovo_Gauss_14N = 28474 Pulkovo_Gauss_15N = 28475 Pulkovo_Gauss_16N = 28476 Pulkovo_Gauss_17N = 28477 Pulkovo_Gauss_18N = 28478 Pulkovo_Gauss_19N = 28479 Pulkovo_Gauss_20N = 28480 
Pulkovo_Gauss_21N = 28481 Pulkovo_Gauss_22N = 28482 Pulkovo_Gauss_23N = 28483 Pulkovo_Gauss_24N = 28484 Pulkovo_Gauss_25N = 28485 Pulkovo_Gauss_26N = 28486 Pulkovo_Gauss_27N = 28487 Pulkovo_Gauss_28N = 28488 Pulkovo_Gauss_29N = 28489 Pulkovo_Gauss_30N = 28490 Pulkovo_Gauss_31N = 28491 Pulkovo_Gauss_32N = 28492 Qatar_National_Grid = 28600 RD_Netherlands_Old = 28991 RD_Netherlands_New = 28992 SAD69_UTM_zone_18N = 29118 SAD69_UTM_zone_19N = 29119 SAD69_UTM_zone_20N = 29120 SAD69_UTM_zone_21N = 29121 SAD69_UTM_zone_22N = 29122 SAD69_UTM_zone_17S = 29177 SAD69_UTM_zone_18S = 29178 SAD69_UTM_zone_19S = 29179 SAD69_UTM_zone_20S = 29180 SAD69_UTM_zone_21S = 29181 SAD69_UTM_zone_22S = 29182 SAD69_UTM_zone_23S = 29183 SAD69_UTM_zone_24S = 29184 SAD69_UTM_zone_25S = 29185 Sapper_Hill_UTM_20S = 29220 Sapper_Hill_UTM_21S = 29221 Schwarzeck_UTM_33S = 29333 Sudan_UTM_zone_35N = 29635 Sudan_UTM_zone_36N = 29636 Tananarive_Laborde = 29700 Tananarive_UTM_38S = 29738 Tananarive_UTM_39S = 29739 Timbalai_1948_Borneo = 29800 Timbalai_1948_UTM_49N = 29849 Timbalai_1948_UTM_50N = 29850 TM65_Irish_Nat_Grid = 29900 Trinidad_1903_Trinidad = 30200 TC_1948_UTM_zone_39N = 30339 TC_1948_UTM_zone_40N = 30340 Voirol_N_Algerie_ancien = 30491 Voirol_S_Algerie_ancien = 30492 Voirol_Unifie_N_Algerie = 30591 Voirol_Unifie_S_Algerie = 30592 Bern_1938_Swiss_New = 30600 Nord_Sahara_UTM_29N = 30729 Nord_Sahara_UTM_30N = 30730 Nord_Sahara_UTM_31N = 30731 Nord_Sahara_UTM_32N = 30732 Yoff_UTM_zone_28N = 31028 Zanderij_UTM_zone_21N = 31121 MGI_Austria_West = 31291 MGI_Austria_Central = 31292 MGI_Austria_East = 31293 Belge_Lambert_72 = 31300 DHDN_Germany_zone_1 = 31491 DHDN_Germany_zone_2 = 31492 DHDN_Germany_zone_3 = 31493 DHDN_Germany_zone_4 = 31494 DHDN_Germany_zone_5 = 31495 NAD27_Montana_North = 32001 NAD27_Montana_Central = 32002 NAD27_Montana_South = 32003 NAD27_Nebraska_North = 32005 NAD27_Nebraska_South = 32006 NAD27_Nevada_East = 32007 NAD27_Nevada_Central = 32008 NAD27_Nevada_West = 32009 NAD27_New_Hampshire = 32010 NAD27_New_Jersey = 32011 NAD27_New_Mexico_East = 32012 NAD27_New_Mexico_Cent = 32013 NAD27_New_Mexico_West = 32014 NAD27_New_York_East = 32015 NAD27_New_York_Central = 32016 NAD27_New_York_West = 32017 NAD27_New_York_Long_Is = 32018 NAD27_North_Carolina = 32019 NAD27_North_Dakota_N = 32020 NAD27_North_Dakota_S = 32021 NAD27_Ohio_North = 32022 NAD27_Ohio_South = 32023 NAD27_Oklahoma_North = 32024 NAD27_Oklahoma_South = 32025 NAD27_Oregon_North = 32026 NAD27_Oregon_South = 32027 NAD27_Pennsylvania_N = 32028 NAD27_Pennsylvania_S = 32029 NAD27_Rhode_Island = 32030 NAD27_South_Carolina_N = 32031 NAD27_South_Carolina_S = 32033 NAD27_South_Dakota_N = 32034 NAD27_South_Dakota_S = 32035 NAD27_Tennessee = 2204 NAD27_Texas_North = 32037 NAD27_Texas_North_Cen = 32038 NAD27_Texas_Central = 32039 NAD27_Texas_South_Cen = 32040 NAD27_Texas_South = 32041 NAD27_Utah_North = 32042 NAD27_Utah_Central = 32043 NAD27_Utah_South = 32044 NAD27_Vermont = 32045 NAD27_Virginia_North = 32046 NAD27_Virginia_South = 32047 NAD27_Washington_North = 32048 NAD27_Washington_South = 32049 NAD27_West_Virginia_N = 32050 NAD27_West_Virginia_S = 32051 NAD27_Wisconsin_North = 32052 NAD27_Wisconsin_Cen = 32053 NAD27_Wisconsin_South = 32054 NAD27_Wyoming_East = 32055 NAD27_Wyoming_E_Cen = 32056 NAD27_Wyoming_W_Cen = 32057 NAD27_Wyoming_West = 32058 NAD27_Puerto_Rico = 32059 NAD27_St_Croix = 32060 NAD83_Montana = 32100 NAD83_Nebraska = 32104 NAD83_Nevada_East = 32107 NAD83_Nevada_Central = 32108 NAD83_Nevada_West = 32109 NAD83_New_Hampshire = 32110 
NAD83_New_Jersey = 32111 NAD83_New_Mexico_East = 32112 NAD83_New_Mexico_Cent = 32113 NAD83_New_Mexico_West = 32114 NAD83_New_York_East = 32115 NAD83_New_York_Central = 32116 NAD83_New_York_West = 32117 NAD83_New_York_Long_Is = 32118 NAD83_North_Carolina = 32119 NAD83_North_Dakota_N = 32120 NAD83_North_Dakota_S = 32121 NAD83_Ohio_North = 32122 NAD83_Ohio_South = 32123 NAD83_Oklahoma_North = 32124 NAD83_Oklahoma_South = 32125 NAD83_Oregon_North = 32126 NAD83_Oregon_South = 32127 NAD83_Pennsylvania_N = 32128 NAD83_Pennsylvania_S = 32129 NAD83_Rhode_Island = 32130 NAD83_South_Carolina = 32133 NAD83_South_Dakota_N = 32134 NAD83_South_Dakota_S = 32135 NAD83_Tennessee = 32136 NAD83_Texas_North = 32137 NAD83_Texas_North_Cen = 32138 NAD83_Texas_Central = 32139 NAD83_Texas_South_Cen = 32140 NAD83_Texas_South = 32141 NAD83_Utah_North = 32142 NAD83_Utah_Central = 32143 NAD83_Utah_South = 32144 NAD83_Vermont = 32145 NAD83_Virginia_North = 32146 NAD83_Virginia_South = 32147 NAD83_Washington_North = 32148 NAD83_Washington_South = 32149 NAD83_West_Virginia_N = 32150 NAD83_West_Virginia_S = 32151 NAD83_Wisconsin_North = 32152 NAD83_Wisconsin_Cen = 32153 NAD83_Wisconsin_South = 32154 NAD83_Wyoming_East = 32155 NAD83_Wyoming_E_Cen = 32156 NAD83_Wyoming_W_Cen = 32157 NAD83_Wyoming_West = 32158 NAD83_Puerto_Rico_Virgin_Is = 32161 WGS72_UTM_zone_1N = 32201 WGS72_UTM_zone_2N = 32202 WGS72_UTM_zone_3N = 32203 WGS72_UTM_zone_4N = 32204 WGS72_UTM_zone_5N = 32205 WGS72_UTM_zone_6N = 32206 WGS72_UTM_zone_7N = 32207 WGS72_UTM_zone_8N = 32208 WGS72_UTM_zone_9N = 32209 WGS72_UTM_zone_10N = 32210 WGS72_UTM_zone_11N = 32211 WGS72_UTM_zone_12N = 32212 WGS72_UTM_zone_13N = 32213 WGS72_UTM_zone_14N = 32214 WGS72_UTM_zone_15N = 32215 WGS72_UTM_zone_16N = 32216 WGS72_UTM_zone_17N = 32217 WGS72_UTM_zone_18N = 32218 WGS72_UTM_zone_19N = 32219 WGS72_UTM_zone_20N = 32220 WGS72_UTM_zone_21N = 32221 WGS72_UTM_zone_22N = 32222 WGS72_UTM_zone_23N = 32223 WGS72_UTM_zone_24N = 32224 WGS72_UTM_zone_25N = 32225 WGS72_UTM_zone_26N = 32226 WGS72_UTM_zone_27N = 32227 WGS72_UTM_zone_28N = 32228 WGS72_UTM_zone_29N = 32229 WGS72_UTM_zone_30N = 32230 WGS72_UTM_zone_31N = 32231 WGS72_UTM_zone_32N = 32232 WGS72_UTM_zone_33N = 32233 WGS72_UTM_zone_34N = 32234 WGS72_UTM_zone_35N = 32235 WGS72_UTM_zone_36N = 32236 WGS72_UTM_zone_37N = 32237 WGS72_UTM_zone_38N = 32238 WGS72_UTM_zone_39N = 32239 WGS72_UTM_zone_40N = 32240 WGS72_UTM_zone_41N = 32241 WGS72_UTM_zone_42N = 32242 WGS72_UTM_zone_43N = 32243 WGS72_UTM_zone_44N = 32244 WGS72_UTM_zone_45N = 32245 WGS72_UTM_zone_46N = 32246 WGS72_UTM_zone_47N = 32247 WGS72_UTM_zone_48N = 32248 WGS72_UTM_zone_49N = 32249 WGS72_UTM_zone_50N = 32250 WGS72_UTM_zone_51N = 32251 WGS72_UTM_zone_52N = 32252 WGS72_UTM_zone_53N = 32253 WGS72_UTM_zone_54N = 32254 WGS72_UTM_zone_55N = 32255 WGS72_UTM_zone_56N = 32256 WGS72_UTM_zone_57N = 32257 WGS72_UTM_zone_58N = 32258 WGS72_UTM_zone_59N = 32259 WGS72_UTM_zone_60N = 32260 WGS72_UTM_zone_1S = 32301 WGS72_UTM_zone_2S = 32302 WGS72_UTM_zone_3S = 32303 WGS72_UTM_zone_4S = 32304 WGS72_UTM_zone_5S = 32305 WGS72_UTM_zone_6S = 32306 WGS72_UTM_zone_7S = 32307 WGS72_UTM_zone_8S = 32308 WGS72_UTM_zone_9S = 32309 WGS72_UTM_zone_10S = 32310 WGS72_UTM_zone_11S = 32311 WGS72_UTM_zone_12S = 32312 WGS72_UTM_zone_13S = 32313 WGS72_UTM_zone_14S = 32314 WGS72_UTM_zone_15S = 32315 WGS72_UTM_zone_16S = 32316 WGS72_UTM_zone_17S = 32317 WGS72_UTM_zone_18S = 32318 WGS72_UTM_zone_19S = 32319 WGS72_UTM_zone_20S = 32320 WGS72_UTM_zone_21S = 32321 WGS72_UTM_zone_22S = 32322 WGS72_UTM_zone_23S = 32323 
WGS72_UTM_zone_24S = 32324 WGS72_UTM_zone_25S = 32325 WGS72_UTM_zone_26S = 32326 WGS72_UTM_zone_27S = 32327 WGS72_UTM_zone_28S = 32328 WGS72_UTM_zone_29S = 32329 WGS72_UTM_zone_30S = 32330 WGS72_UTM_zone_31S = 32331 WGS72_UTM_zone_32S = 32332 WGS72_UTM_zone_33S = 32333 WGS72_UTM_zone_34S = 32334 WGS72_UTM_zone_35S = 32335 WGS72_UTM_zone_36S = 32336 WGS72_UTM_zone_37S = 32337 WGS72_UTM_zone_38S = 32338 WGS72_UTM_zone_39S = 32339 WGS72_UTM_zone_40S = 32340 WGS72_UTM_zone_41S = 32341 WGS72_UTM_zone_42S = 32342 WGS72_UTM_zone_43S = 32343 WGS72_UTM_zone_44S = 32344 WGS72_UTM_zone_45S = 32345 WGS72_UTM_zone_46S = 32346 WGS72_UTM_zone_47S = 32347 WGS72_UTM_zone_48S = 32348 WGS72_UTM_zone_49S = 32349 WGS72_UTM_zone_50S = 32350 WGS72_UTM_zone_51S = 32351 WGS72_UTM_zone_52S = 32352 WGS72_UTM_zone_53S = 32353 WGS72_UTM_zone_54S = 32354 WGS72_UTM_zone_55S = 32355 WGS72_UTM_zone_56S = 32356 WGS72_UTM_zone_57S = 32357 WGS72_UTM_zone_58S = 32358 WGS72_UTM_zone_59S = 32359 WGS72_UTM_zone_60S = 32360 WGS72BE_UTM_zone_1N = 32401 WGS72BE_UTM_zone_2N = 32402 WGS72BE_UTM_zone_3N = 32403 WGS72BE_UTM_zone_4N = 32404 WGS72BE_UTM_zone_5N = 32405 WGS72BE_UTM_zone_6N = 32406 WGS72BE_UTM_zone_7N = 32407 WGS72BE_UTM_zone_8N = 32408 WGS72BE_UTM_zone_9N = 32409 WGS72BE_UTM_zone_10N = 32410 WGS72BE_UTM_zone_11N = 32411 WGS72BE_UTM_zone_12N = 32412 WGS72BE_UTM_zone_13N = 32413 WGS72BE_UTM_zone_14N = 32414 WGS72BE_UTM_zone_15N = 32415 WGS72BE_UTM_zone_16N = 32416 WGS72BE_UTM_zone_17N = 32417 WGS72BE_UTM_zone_18N = 32418 WGS72BE_UTM_zone_19N = 32419 WGS72BE_UTM_zone_20N = 32420 WGS72BE_UTM_zone_21N = 32421 WGS72BE_UTM_zone_22N = 32422 WGS72BE_UTM_zone_23N = 32423 WGS72BE_UTM_zone_24N = 32424 WGS72BE_UTM_zone_25N = 32425 WGS72BE_UTM_zone_26N = 32426 WGS72BE_UTM_zone_27N = 32427 WGS72BE_UTM_zone_28N = 32428 WGS72BE_UTM_zone_29N = 32429 WGS72BE_UTM_zone_30N = 32430 WGS72BE_UTM_zone_31N = 32431 WGS72BE_UTM_zone_32N = 32432 WGS72BE_UTM_zone_33N = 32433 WGS72BE_UTM_zone_34N = 32434 WGS72BE_UTM_zone_35N = 32435 WGS72BE_UTM_zone_36N = 32436 WGS72BE_UTM_zone_37N = 32437 WGS72BE_UTM_zone_38N = 32438 WGS72BE_UTM_zone_39N = 32439 WGS72BE_UTM_zone_40N = 32440 WGS72BE_UTM_zone_41N = 32441 WGS72BE_UTM_zone_42N = 32442 WGS72BE_UTM_zone_43N = 32443 WGS72BE_UTM_zone_44N = 32444 WGS72BE_UTM_zone_45N = 32445 WGS72BE_UTM_zone_46N = 32446 WGS72BE_UTM_zone_47N = 32447 WGS72BE_UTM_zone_48N = 32448 WGS72BE_UTM_zone_49N = 32449 WGS72BE_UTM_zone_50N = 32450 WGS72BE_UTM_zone_51N = 32451 WGS72BE_UTM_zone_52N = 32452 WGS72BE_UTM_zone_53N = 32453 WGS72BE_UTM_zone_54N = 32454 WGS72BE_UTM_zone_55N = 32455 WGS72BE_UTM_zone_56N = 32456 WGS72BE_UTM_zone_57N = 32457 WGS72BE_UTM_zone_58N = 32458 WGS72BE_UTM_zone_59N = 32459 WGS72BE_UTM_zone_60N = 32460 WGS72BE_UTM_zone_1S = 32501 WGS72BE_UTM_zone_2S = 32502 WGS72BE_UTM_zone_3S = 32503 WGS72BE_UTM_zone_4S = 32504 WGS72BE_UTM_zone_5S = 32505 WGS72BE_UTM_zone_6S = 32506 WGS72BE_UTM_zone_7S = 32507 WGS72BE_UTM_zone_8S = 32508 WGS72BE_UTM_zone_9S = 32509 WGS72BE_UTM_zone_10S = 32510 WGS72BE_UTM_zone_11S = 32511 WGS72BE_UTM_zone_12S = 32512 WGS72BE_UTM_zone_13S = 32513 WGS72BE_UTM_zone_14S = 32514 WGS72BE_UTM_zone_15S = 32515 WGS72BE_UTM_zone_16S = 32516 WGS72BE_UTM_zone_17S = 32517 WGS72BE_UTM_zone_18S = 32518 WGS72BE_UTM_zone_19S = 32519 WGS72BE_UTM_zone_20S = 32520 WGS72BE_UTM_zone_21S = 32521 WGS72BE_UTM_zone_22S = 32522 WGS72BE_UTM_zone_23S = 32523 WGS72BE_UTM_zone_24S = 32524 WGS72BE_UTM_zone_25S = 32525 WGS72BE_UTM_zone_26S = 32526 WGS72BE_UTM_zone_27S = 32527 WGS72BE_UTM_zone_28S = 32528 WGS72BE_UTM_zone_29S 
= 32529 WGS72BE_UTM_zone_30S = 32530 WGS72BE_UTM_zone_31S = 32531 WGS72BE_UTM_zone_32S = 32532 WGS72BE_UTM_zone_33S = 32533 WGS72BE_UTM_zone_34S = 32534 WGS72BE_UTM_zone_35S = 32535 WGS72BE_UTM_zone_36S = 32536 WGS72BE_UTM_zone_37S = 32537 WGS72BE_UTM_zone_38S = 32538 WGS72BE_UTM_zone_39S = 32539 WGS72BE_UTM_zone_40S = 32540 WGS72BE_UTM_zone_41S = 32541 WGS72BE_UTM_zone_42S = 32542 WGS72BE_UTM_zone_43S = 32543 WGS72BE_UTM_zone_44S = 32544 WGS72BE_UTM_zone_45S = 32545 WGS72BE_UTM_zone_46S = 32546 WGS72BE_UTM_zone_47S = 32547 WGS72BE_UTM_zone_48S = 32548 WGS72BE_UTM_zone_49S = 32549 WGS72BE_UTM_zone_50S = 32550 WGS72BE_UTM_zone_51S = 32551 WGS72BE_UTM_zone_52S = 32552 WGS72BE_UTM_zone_53S = 32553 WGS72BE_UTM_zone_54S = 32554 WGS72BE_UTM_zone_55S = 32555 WGS72BE_UTM_zone_56S = 32556 WGS72BE_UTM_zone_57S = 32557 WGS72BE_UTM_zone_58S = 32558 WGS72BE_UTM_zone_59S = 32559 WGS72BE_UTM_zone_60S = 32560 WGS84_UTM_zone_1N = 32601 WGS84_UTM_zone_2N = 32602 WGS84_UTM_zone_3N = 32603 WGS84_UTM_zone_4N = 32604 WGS84_UTM_zone_5N = 32605 WGS84_UTM_zone_6N = 32606 WGS84_UTM_zone_7N = 32607 WGS84_UTM_zone_8N = 32608 WGS84_UTM_zone_9N = 32609 WGS84_UTM_zone_10N = 32610 WGS84_UTM_zone_11N = 32611 WGS84_UTM_zone_12N = 32612 WGS84_UTM_zone_13N = 32613 WGS84_UTM_zone_14N = 32614 WGS84_UTM_zone_15N = 32615 WGS84_UTM_zone_16N = 32616 WGS84_UTM_zone_17N = 32617 WGS84_UTM_zone_18N = 32618 WGS84_UTM_zone_19N = 32619 WGS84_UTM_zone_20N = 32620 WGS84_UTM_zone_21N = 32621 WGS84_UTM_zone_22N = 32622 WGS84_UTM_zone_23N = 32623 WGS84_UTM_zone_24N = 32624 WGS84_UTM_zone_25N = 32625 WGS84_UTM_zone_26N = 32626 WGS84_UTM_zone_27N = 32627 WGS84_UTM_zone_28N = 32628 WGS84_UTM_zone_29N = 32629 WGS84_UTM_zone_30N = 32630 WGS84_UTM_zone_31N = 32631 WGS84_UTM_zone_32N = 32632 WGS84_UTM_zone_33N = 32633 WGS84_UTM_zone_34N = 32634 WGS84_UTM_zone_35N = 32635 WGS84_UTM_zone_36N = 32636 WGS84_UTM_zone_37N = 32637 WGS84_UTM_zone_38N = 32638 WGS84_UTM_zone_39N = 32639 WGS84_UTM_zone_40N = 32640 WGS84_UTM_zone_41N = 32641 WGS84_UTM_zone_42N = 32642 WGS84_UTM_zone_43N = 32643 WGS84_UTM_zone_44N = 32644 WGS84_UTM_zone_45N = 32645 WGS84_UTM_zone_46N = 32646 WGS84_UTM_zone_47N = 32647 WGS84_UTM_zone_48N = 32648 WGS84_UTM_zone_49N = 32649 WGS84_UTM_zone_50N = 32650 WGS84_UTM_zone_51N = 32651 WGS84_UTM_zone_52N = 32652 WGS84_UTM_zone_53N = 32653 WGS84_UTM_zone_54N = 32654 WGS84_UTM_zone_55N = 32655 WGS84_UTM_zone_56N = 32656 WGS84_UTM_zone_57N = 32657 WGS84_UTM_zone_58N = 32658 WGS84_UTM_zone_59N = 32659 WGS84_UTM_zone_60N = 32660 WGS84_UTM_zone_1S = 32701 WGS84_UTM_zone_2S = 32702 WGS84_UTM_zone_3S = 32703 WGS84_UTM_zone_4S = 32704 WGS84_UTM_zone_5S = 32705 WGS84_UTM_zone_6S = 32706 WGS84_UTM_zone_7S = 32707 WGS84_UTM_zone_8S = 32708 WGS84_UTM_zone_9S = 32709 WGS84_UTM_zone_10S = 32710 WGS84_UTM_zone_11S = 32711 WGS84_UTM_zone_12S = 32712 WGS84_UTM_zone_13S = 32713 WGS84_UTM_zone_14S = 32714 WGS84_UTM_zone_15S = 32715 WGS84_UTM_zone_16S = 32716 WGS84_UTM_zone_17S = 32717 WGS84_UTM_zone_18S = 32718 WGS84_UTM_zone_19S = 32719 WGS84_UTM_zone_20S = 32720 WGS84_UTM_zone_21S = 32721 WGS84_UTM_zone_22S = 32722 WGS84_UTM_zone_23S = 32723 WGS84_UTM_zone_24S = 32724 WGS84_UTM_zone_25S = 32725 WGS84_UTM_zone_26S = 32726 WGS84_UTM_zone_27S = 32727 WGS84_UTM_zone_28S = 32728 WGS84_UTM_zone_29S = 32729 WGS84_UTM_zone_30S = 32730 WGS84_UTM_zone_31S = 32731 WGS84_UTM_zone_32S = 32732 WGS84_UTM_zone_33S = 32733 WGS84_UTM_zone_34S = 32734 WGS84_UTM_zone_35S = 32735 WGS84_UTM_zone_36S = 32736 WGS84_UTM_zone_37S = 32737 WGS84_UTM_zone_38S = 32738 WGS84_UTM_zone_39S 
= 32739 WGS84_UTM_zone_40S = 32740 WGS84_UTM_zone_41S = 32741 WGS84_UTM_zone_42S = 32742 WGS84_UTM_zone_43S = 32743 WGS84_UTM_zone_44S = 32744 WGS84_UTM_zone_45S = 32745 WGS84_UTM_zone_46S = 32746 WGS84_UTM_zone_47S = 32747 WGS84_UTM_zone_48S = 32748 WGS84_UTM_zone_49S = 32749 WGS84_UTM_zone_50S = 32750 WGS84_UTM_zone_51S = 32751 WGS84_UTM_zone_52S = 32752 WGS84_UTM_zone_53S = 32753 WGS84_UTM_zone_54S = 32754 WGS84_UTM_zone_55S = 32755 WGS84_UTM_zone_56S = 32756 WGS84_UTM_zone_57S = 32757 WGS84_UTM_zone_58S = 32758 WGS84_UTM_zone_59S = 32759 WGS84_UTM_zone_60S = 32760 # New GGRS87_Greek_Grid = 2100 KKJ_Finland_zone_1 = 2391 KKJ_Finland_zone_2 = 2392 KKJ_Finland_zone_3 = 2393 KKJ_Finland_zone_4 = 2394 RT90_2_5_gon_W = 2400 Lietuvos_Koordinoei_Sistema_1994 = 2600 Estonian_Coordinate_System_of_1992 = 3300 HD72_EOV = 23700 Dealul_Piscului_1970_Stereo_70 = 31700 # Newer Hjorsey_1955_Lambert = 3053 ISN93_Lambert_1993 = 3057 ETRS89_Poland_CS2000_zone_5 = 2176 ETRS89_Poland_CS2000_zone_6 = 2177 ETRS89_Poland_CS2000_zone_7 = 2177 ETRS89_Poland_CS2000_zone_8 = 2178 ETRS89_Poland_CS92 = 2180 class GCSE(enum.IntEnum): """Unspecified GCS based on ellipsoid.""" Undefined = 0 User_Defined = 32767 Airy1830 = 4001 AiryModified1849 = 4002 AustralianNationalSpheroid = 4003 Bessel1841 = 4004 BesselModified = 4005 BesselNamibia = 4006 Clarke1858 = 4007 Clarke1866 = 4008 Clarke1866Michigan = 4009 Clarke1880_Benoit = 4010 Clarke1880_IGN = 4011 Clarke1880_RGS = 4012 Clarke1880_Arc = 4013 Clarke1880_SGA1922 = 4014 Everest1830_1937Adjustment = 4015 Everest1830_1967Definition = 4016 Everest1830_1975Definition = 4017 Everest1830Modified = 4018 GRS1980 = 4019 Helmert1906 = 4020 IndonesianNationalSpheroid = 4021 International1924 = 4022 International1967 = 4023 Krassowsky1940 = 4024 NWL9D = 4025 NWL10D = 4026 Plessis1817 = 4027 Struve1860 = 4028 WarOffice = 4029 WGS84 = 4030 GEM10C = 4031 OSU86F = 4032 OSU91A = 4033 Clarke1880 = 4034 Sphere = 4035 class GCS(enum.IntEnum): """Geographic CS Type Codes.""" Undefined = 0 User_Defined = 32767 Adindan = 4201 AGD66 = 4202 AGD84 = 4203 Ain_el_Abd = 4204 Afgooye = 4205 Agadez = 4206 Lisbon = 4207 Aratu = 4208 Arc_1950 = 4209 Arc_1960 = 4210 Batavia = 4211 Barbados = 4212 Beduaram = 4213 Beijing_1954 = 4214 Belge_1950 = 4215 Bermuda_1957 = 4216 Bern_1898 = 4217 Bogota = 4218 Bukit_Rimpah = 4219 Camacupa = 4220 Campo_Inchauspe = 4221 Cape = 4222 Carthage = 4223 Chua = 4224 Corrego_Alegre = 4225 Cote_d_Ivoire = 4226 Deir_ez_Zor = 4227 Douala = 4228 Egypt_1907 = 4229 ED50 = 4230 ED87 = 4231 Fahud = 4232 Gandajika_1970 = 4233 Garoua = 4234 Guyane_Francaise = 4235 Hu_Tzu_Shan = 4236 HD72 = 4237 ID74 = 4238 Indian_1954 = 4239 Indian_1975 = 4240 Jamaica_1875 = 4241 JAD69 = 4242 Kalianpur = 4243 Kandawala = 4244 Kertau = 4245 KOC = 4246 La_Canoa = 4247 PSAD56 = 4248 Lake = 4249 Leigon = 4250 Liberia_1964 = 4251 Lome = 4252 Luzon_1911 = 4253 Hito_XVIII_1963 = 4254 Herat_North = 4255 Mahe_1971 = 4256 Makassar = 4257 EUREF89 = 4258 Malongo_1987 = 4259 Manoca = 4260 Merchich = 4261 Massawa = 4262 Minna = 4263 Mhast = 4264 Monte_Mario = 4265 M_poraloko = 4266 NAD27 = 4267 NAD_Michigan = 4268 NAD83 = 4269 Nahrwan_1967 = 4270 Naparima_1972 = 4271 GD49 = 4272 NGO_1948 = 4273 Datum_73 = 4274 NTF = 4275 NSWC_9Z_2 = 4276 OSGB_1936 = 4277 OSGB70 = 4278 OS_SN80 = 4279 Padang = 4280 Palestine_1923 = 4281 Pointe_Noire = 4282 GDA94 = 4283 Pulkovo_1942 = 4284 Qatar = 4285 Qatar_1948 = 4286 Qornoq = 4287 Loma_Quintana = 4288 Amersfoort = 4289 RT38 = 4290 SAD69 = 4291 Sapper_Hill_1943 = 4292 Schwarzeck = 
4293 Segora = 4294 Serindung = 4295 Sudan = 4296 Tananarive = 4297 Timbalai_1948 = 4298 TM65 = 4299 TM75 = 4300 Tokyo = 4301 Trinidad_1903 = 4302 TC_1948 = 4303 Voirol_1875 = 4304 Voirol_Unifie = 4305 Bern_1938 = 4306 Nord_Sahara_1959 = 4307 Stockholm_1938 = 4308 Yacare = 4309 Yoff = 4310 Zanderij = 4311 MGI = 4312 Belge_1972 = 4313 DHDN = 4314 Conakry_1905 = 4315 WGS_72 = 4322 WGS_72BE = 4324 WGS_84 = 4326 Bern_1898_Bern = 4801 Bogota_Bogota = 4802 Lisbon_Lisbon = 4803 Makassar_Jakarta = 4804 MGI_Ferro = 4805 Monte_Mario_Rome = 4806 NTF_Paris = 4807 Padang_Jakarta = 4808 Belge_1950_Brussels = 4809 Tananarive_Paris = 4810 Voirol_1875_Paris = 4811 Voirol_Unifie_Paris = 4812 Batavia_Jakarta = 4813 ATF_Paris = 4901 NDG_Paris = 4902 # New GCS Greek = 4120 GGRS87 = 4121 KKJ = 4123 RT90 = 4124 EST92 = 4133 Dealul_Piscului_1970 = 4317 Greek_Athens = 4815 class Ellipse(enum.IntEnum): """Ellipsoid Codes.""" Undefined = 0 User_Defined = 32767 Airy_1830 = 7001 Airy_Modified_1849 = 7002 Australian_National_Spheroid = 7003 Bessel_1841 = 7004 Bessel_Modified = 7005 Bessel_Namibia = 7006 Clarke_1858 = 7007 Clarke_1866 = 7008 Clarke_1866_Michigan = 7009 Clarke_1880_Benoit = 7010 Clarke_1880_IGN = 7011 Clarke_1880_RGS = 7012 Clarke_1880_Arc = 7013 Clarke_1880_SGA_1922 = 7014 Everest_1830_1937_Adjustment = 7015 Everest_1830_1967_Definition = 7016 Everest_1830_1975_Definition = 7017 Everest_1830_Modified = 7018 GRS_1980 = 7019 Helmert_1906 = 7020 Indonesian_National_Spheroid = 7021 International_1924 = 7022 International_1967 = 7023 Krassowsky_1940 = 7024 NWL_9D = 7025 NWL_10D = 7026 Plessis_1817 = 7027 Struve_1860 = 7028 War_Office = 7029 WGS_84 = 7030 GEM_10C = 7031 OSU86F = 7032 OSU91A = 7033 Clarke_1880 = 7034 Sphere = 7035 class DatumE(enum.IntEnum): """Ellipsoid-Only Geodetic Datum Codes.""" Undefined = 0 User_Defined = 32767 Airy1830 = 6001 AiryModified1849 = 6002 AustralianNationalSpheroid = 6003 Bessel1841 = 6004 BesselModified = 6005 BesselNamibia = 6006 Clarke1858 = 6007 Clarke1866 = 6008 Clarke1866Michigan = 6009 Clarke1880_Benoit = 6010 Clarke1880_IGN = 6011 Clarke1880_RGS = 6012 Clarke1880_Arc = 6013 Clarke1880_SGA1922 = 6014 Everest1830_1937Adjustment = 6015 Everest1830_1967Definition = 6016 Everest1830_1975Definition = 6017 Everest1830Modified = 6018 GRS1980 = 6019 Helmert1906 = 6020 IndonesianNationalSpheroid = 6021 International1924 = 6022 International1967 = 6023 Krassowsky1960 = 6024 NWL9D = 6025 NWL10D = 6026 Plessis1817 = 6027 Struve1860 = 6028 WarOffice = 6029 WGS84 = 6030 GEM10C = 6031 OSU86F = 6032 OSU91A = 6033 Clarke1880 = 6034 Sphere = 6035 class Datum(enum.IntEnum): """Geodetic Datum Codes.""" Undefined = 0 User_Defined = 32767 Adindan = 6201 Australian_Geodetic_Datum_1966 = 6202 Australian_Geodetic_Datum_1984 = 6203 Ain_el_Abd_1970 = 6204 Afgooye = 6205 Agadez = 6206 Lisbon = 6207 Aratu = 6208 Arc_1950 = 6209 Arc_1960 = 6210 Batavia = 6211 Barbados = 6212 Beduaram = 6213 Beijing_1954 = 6214 Reseau_National_Belge_1950 = 6215 Bermuda_1957 = 6216 Bern_1898 = 6217 Bogota = 6218 Bukit_Rimpah = 6219 Camacupa = 6220 Campo_Inchauspe = 6221 Cape = 6222 Carthage = 6223 Chua = 6224 Corrego_Alegre = 6225 Cote_d_Ivoire = 6226 Deir_ez_Zor = 6227 Douala = 6228 Egypt_1907 = 6229 European_Datum_1950 = 6230 European_Datum_1987 = 6231 Fahud = 6232 Gandajika_1970 = 6233 Garoua = 6234 Guyane_Francaise = 6235 Hu_Tzu_Shan = 6236 Hungarian_Datum_1972 = 6237 Indonesian_Datum_1974 = 6238 Indian_1954 = 6239 Indian_1975 = 6240 Jamaica_1875 = 6241 Jamaica_1969 = 6242 Kalianpur = 6243 Kandawala = 6244 Kertau 
= 6245 Kuwait_Oil_Company = 6246 La_Canoa = 6247 Provisional_S_American_Datum_1956 = 6248 Lake = 6249 Leigon = 6250 Liberia_1964 = 6251 Lome = 6252 Luzon_1911 = 6253 Hito_XVIII_1963 = 6254 Herat_North = 6255 Mahe_1971 = 6256 Makassar = 6257 European_Reference_System_1989 = 6258 Malongo_1987 = 6259 Manoca = 6260 Merchich = 6261 Massawa = 6262 Minna = 6263 Mhast = 6264 Monte_Mario = 6265 M_poraloko = 6266 North_American_Datum_1927 = 6267 NAD_Michigan = 6268 North_American_Datum_1983 = 6269 Nahrwan_1967 = 6270 Naparima_1972 = 6271 New_Zealand_Geodetic_Datum_1949 = 6272 NGO_1948 = 6273 Datum_73 = 6274 Nouvelle_Triangulation_Francaise = 6275 NSWC_9Z_2 = 6276 OSGB_1936 = 6277 OSGB_1970_SN = 6278 OS_SN_1980 = 6279 Padang_1884 = 6280 Palestine_1923 = 6281 Pointe_Noire = 6282 Geocentric_Datum_of_Australia_1994 = 6283 Pulkovo_1942 = 6284 Qatar = 6285 Qatar_1948 = 6286 Qornoq = 6287 Loma_Quintana = 6288 Amersfoort = 6289 RT38 = 6290 South_American_Datum_1969 = 6291 Sapper_Hill_1943 = 6292 Schwarzeck = 6293 Segora = 6294 Serindung = 6295 Sudan = 6296 Tananarive_1925 = 6297 Timbalai_1948 = 6298 TM65 = 6299 TM75 = 6300 Tokyo = 6301 Trinidad_1903 = 6302 Trucial_Coast_1948 = 6303 Voirol_1875 = 6304 Voirol_Unifie_1960 = 6305 Bern_1938 = 6306 Nord_Sahara_1959 = 6307 Stockholm_1938 = 6308 Yacare = 6309 Yoff = 6310 Zanderij = 6311 Militar_Geographische_Institut = 6312 Reseau_National_Belge_1972 = 6313 Deutsche_Hauptdreiecksnetz = 6314 Conakry_1905 = 6315 Dealul_Piscului_1930 = 6316 Dealul_Piscului_1970 = 6317 WGS72 = 6322 WGS72_Transit_Broadcast_Ephemeris = 6324 WGS84 = 6326 Ancienne_Triangulation_Francaise = 6901 Nord_de_Guerre = 6902 class ModelType(enum.IntEnum): """Model Type Codes.""" Undefined = 0 User_Defined = 32767 Projected = 1 Geographic = 2 Geocentric = 3 class RasterPixel(enum.IntEnum): """Raster Type Codes.""" Undefined = 0 User_Defined = 32767 IsArea = 1 IsPoint = 2 class Linear(enum.IntEnum): """Linear Units.""" Undefined = 0 User_Defined = 32767 Meter = 9001 Foot = 9002 Foot_US_Survey = 9003 Foot_Modified_American = 9004 Foot_Clarke = 9005 Foot_Indian = 9006 Link = 9007 Link_Benoit = 9008 Link_Sears = 9009 Chain_Benoit = 9010 Chain_Sears = 9011 Yard_Sears = 9012 Yard_Indian = 9013 Fathom = 9014 Mile_International_Nautical = 9015 class Angular(enum.IntEnum): """Angular Units.""" Undefined = 0 User_Defined = 32767 Radian = 9101 Degree = 9102 Arc_Minute = 9103 Arc_Second = 9104 Grad = 9105 Gon = 9106 DMS = 9107 DMS_Hemisphere = 9108 class PM(enum.IntEnum): """Prime Meridian Codes.""" Undefined = 0 User_Defined = 32767 Greenwich = 8901 Lisbon = 8902 Paris = 8903 Bogota = 8904 Madrid = 8905 Rome = 8906 Bern = 8907 Jakarta = 8908 Ferro = 8909 Brussels = 8910 Stockholm = 8911 class CT(enum.IntEnum): """Coordinate Transformation Codes.""" Undefined = 0 User_Defined = 32767 TransverseMercator = 1 TransvMercator_Modified_Alaska = 2 ObliqueMercator = 3 ObliqueMercator_Laborde = 4 ObliqueMercator_Rosenmund = 5 ObliqueMercator_Spherical = 6 Mercator = 7 LambertConfConic_2SP = 8 LambertConfConic_Helmert = 9 LambertAzimEqualArea = 10 AlbersEqualArea = 11 AzimuthalEquidistant = 12 EquidistantConic = 13 Stereographic = 14 PolarStereographic = 15 ObliqueStereographic = 16 Equirectangular = 17 CassiniSoldner = 18 Gnomonic = 19 MillerCylindrical = 20 Orthographic = 21 Polyconic = 22 Robinson = 23 Sinusoidal = 24 VanDerGrinten = 25 NewZealandMapGrid = 26 TransvMercator_SouthOriented = 27 CylindricalEqualArea = 28 HotineObliqueMercatorAzimuthCenter = 9815 class VertCS(enum.IntEnum): """Vertical CS Type Codes.""" 
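    # Editor's note (added): codes 5001-5033 below largely mirror the
    # ellipsoid codes 7001-7033 in the Ellipse class above (same ordering,
    # offset by 2000; 5009 is unassigned) and denote heights relative to a
    # pure ellipsoid, while the codes from 5101 onward name orthometric
    # vertical datums.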
    Undefined = 0
    User_Defined = 32767
    Airy_1830_ellipsoid = 5001
    Airy_Modified_1849_ellipsoid = 5002
    ANS_ellipsoid = 5003
    Bessel_1841_ellipsoid = 5004
    Bessel_Modified_ellipsoid = 5005
    Bessel_Namibia_ellipsoid = 5006
    Clarke_1858_ellipsoid = 5007
    Clarke_1866_ellipsoid = 5008
    Clarke_1880_Benoit_ellipsoid = 5010
    Clarke_1880_IGN_ellipsoid = 5011
    Clarke_1880_RGS_ellipsoid = 5012
    Clarke_1880_Arc_ellipsoid = 5013
    Clarke_1880_SGA_1922_ellipsoid = 5014
    Everest_1830_1937_Adjustment_ellipsoid = 5015
    Everest_1830_1967_Definition_ellipsoid = 5016
    Everest_1830_1975_Definition_ellipsoid = 5017
    Everest_1830_Modified_ellipsoid = 5018
    GRS_1980_ellipsoid = 5019
    Helmert_1906_ellipsoid = 5020
    INS_ellipsoid = 5021
    International_1924_ellipsoid = 5022
    International_1967_ellipsoid = 5023
    Krassowsky_1940_ellipsoid = 5024
    NWL_9D_ellipsoid = 5025
    NWL_10D_ellipsoid = 5026
    Plessis_1817_ellipsoid = 5027
    Struve_1860_ellipsoid = 5028
    War_Office_ellipsoid = 5029
    WGS_84_ellipsoid = 5030
    GEM_10C_ellipsoid = 5031
    OSU86F_ellipsoid = 5032
    OSU91A_ellipsoid = 5033
    # Orthometric Vertical CS
    Newlyn = 5101
    North_American_Vertical_Datum_1929 = 5102
    North_American_Vertical_Datum_1988 = 5103
    Yellow_Sea_1956 = 5104
    Baltic_Sea = 5105
    Caspian_Sea = 5106


GEO_CODES: dict[int, type[enum.IntEnum]] = {
    # map :py:class:`GeoKeys` to GeoTIFF codes
    GeoKeys.GTModelTypeGeoKey: ModelType,
    GeoKeys.GTRasterTypeGeoKey: RasterPixel,
    GeoKeys.GeographicTypeGeoKey: GCS,
    GeoKeys.GeogPrimeMeridianGeoKey: PM,
    GeoKeys.GeogLinearUnitsGeoKey: Linear,
    GeoKeys.GeogAngularUnitsGeoKey: Angular,
    GeoKeys.GeogEllipsoidGeoKey: Ellipse,
    GeoKeys.GeogAzimuthUnitsGeoKey: Angular,
    GeoKeys.ProjectedCSTypeGeoKey: PCS,
    GeoKeys.ProjectionGeoKey: Proj,
    GeoKeys.ProjCoordTransGeoKey: CT,
    GeoKeys.ProjLinearUnitsGeoKey: Linear,
    GeoKeys.VerticalCSTypeGeoKey: VertCS,
    # GeoKeys.VerticalDatumGeoKey: VertCS,
    GeoKeys.VerticalUnitsGeoKey: Linear,
}
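# Editor's sketch (added; not part of the original module): GEO_CODES maps
# a GeoKey to the enum that names its values, so a raw GeoKey directory
# entry can be resolved to a symbolic constant. Assuming this module is
# importable as tifffile.geodb:
#
#     from tifffile.geodb import GEO_CODES, GeoKeys
#
#     enumtype = GEO_CODES[GeoKeys.ProjectedCSTypeGeoKey]  # -> PCS
#     print(enumtype(32632).name)  # 'WGS84_UTM_zone_32N'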
# ==== tifffile-2025.3.30/tifffile/lsm2bin.py ====

#!/usr/bin/env python3
# tifffile/lsm2bin.py

"""Convert TZCYX LSM file to series of BIN files.

Usage: ``lsm2bin lsm_filename [bin_filename]``

"""

from __future__ import annotations

import sys

try:
    from .tifffile import lsm2bin
except ImportError:
    try:
        from tifffile.tifffile import lsm2bin
    except ImportError:
        from tifffile import lsm2bin


def main(argv: list[str] | None = None) -> int:
    """Lsm2bin command line usage main function."""
    if argv is None:
        argv = sys.argv
    if len(argv) > 1:
        lsm2bin(argv[1], argv[2] if len(argv) > 2 else None)
    else:
        print()
        print(__doc__.strip())
    return 0


if __name__ == '__main__':
    sys.exit(main())
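# Editor's note (added): the same conversion is available from Python; a
# minimal sketch mirroring the call in main() above ('scan.lsm' is a
# placeholder file name):
#
#     from tifffile import lsm2bin
#
#     lsm2bin('scan.lsm', None)   # no BIN name given, as in main()
#     lsm2bin('scan.lsm', 'out')  # explicit BIN output name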
# ==== tifffile-2025.3.30/tifffile/numcodecs.py ====

# tifffile/numcodecs.py

# Copyright (c) 2021-2025, Christoph Gohlke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
#    this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions and the following disclaimer in the documentation
#    and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
#    contributors may be used to endorse or promote products derived from
#    this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

"""TIFF codec for the Numcodecs package."""

from __future__ import annotations

__all__ = ['register_codec', 'Tiff']

from io import BytesIO
from typing import TYPE_CHECKING

from numcodecs import registry
from numcodecs.abc import Codec

from .tifffile import TiffFile, TiffWriter

if TYPE_CHECKING:
    from collections.abc import Iterable, Sequence
    from typing import Any

    from .tifffile import (
        COMPRESSION,
        EXTRASAMPLE,
        PHOTOMETRIC,
        PLANARCONFIG,
        PREDICTOR,
        ByteOrder,
        TagTuple,
    )


class Tiff(Codec):  # type: ignore[misc]
    """TIFF codec for Numcodecs."""

    codec_id = 'tifffile'

    def __init__(
        self,
        # TiffFile.asarray
        key: int | slice | Iterable[int] | None = None,
        series: int | None = None,
        level: int | None = None,
        # TiffWriter
        bigtiff: bool = False,
        byteorder: ByteOrder | None = None,
        imagej: bool = False,
        ome: bool | None = None,
        # TiffWriter.write
        photometric: PHOTOMETRIC | int | str | None = None,
        planarconfig: PLANARCONFIG | int | str | None = None,
        extrasamples: Sequence[EXTRASAMPLE | int | str] | None = None,
        volumetric: bool = False,
        tile: Sequence[int] | None = None,
        rowsperstrip: int | None = None,
        compression: COMPRESSION | int | str | None = None,
        compressionargs: dict[str, Any] | None = None,
        predictor: PREDICTOR | int | str | bool | None = None,
        subsampling: tuple[int, int] | None = None,
        metadata: dict[str, Any] | None = {},
        extratags: Sequence[TagTuple] | None = None,
        truncate: bool = False,
        maxworkers: int | None = None,
    ) -> None:
        self.key = key
        self.series = series
        self.level = level
        self.bigtiff = bigtiff
        self.byteorder = byteorder
        self.imagej = imagej
        self.ome = ome
        self.photometric = photometric
        self.planarconfig = planarconfig
        self.extrasamples = extrasamples
        self.volumetric = volumetric
        self.tile = tile
        self.rowsperstrip = rowsperstrip
        self.compression = compression
        self.compressionargs = compressionargs
        self.predictor = predictor
        self.subsampling = subsampling
        self.metadata = metadata
        self.extratags = extratags
        self.truncate = truncate
        self.maxworkers = maxworkers

    def encode(self, buf: Any) -> bytes:
        """Return TIFF file as bytes."""
        with BytesIO() as fh:
            with TiffWriter(
                fh,
                bigtiff=self.bigtiff,
                byteorder=self.byteorder,
                imagej=self.imagej,
                ome=self.ome,
            ) as tif:
                tif.write(
                    buf,
                    photometric=self.photometric,
                    planarconfig=self.planarconfig,
                    extrasamples=self.extrasamples,
                    volumetric=self.volumetric,
                    tile=self.tile,
                    rowsperstrip=self.rowsperstrip,
                    compression=self.compression,
                    compressionargs=self.compressionargs,
                    predictor=self.predictor,
                    subsampling=self.subsampling,
                    metadata=self.metadata,
                    extratags=self.extratags,
                    truncate=self.truncate,
                    maxworkers=self.maxworkers,
                )
            result = fh.getvalue()
        return result

    def decode(self, buf: Any, out: Any = None) -> Any:
        """Return decoded image as NumPy array."""
        with BytesIO(buf) as fh:
            with TiffFile(fh) as tif:
                result = tif.asarray(
                    key=self.key,
                    series=self.series,
                    level=self.level,
                    maxworkers=self.maxworkers,
                    out=out,
                )
        return result


def register_codec(cls: Codec = Tiff, codec_id: str | None = None) -> None:
    """Register :py:class:`Tiff` codec with Numcodecs."""
    registry.register_codec(cls, codec_id=codec_id)
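# Editor's sketch (added; not part of the original module): a minimal
# round trip through the codec, assuming numpy and numcodecs are installed:
#
#     import numpy
#     from tifffile.numcodecs import Tiff, register_codec
#
#     register_codec()  # register codec_id 'tifffile' with numcodecs
#     codec = Tiff(photometric='minisblack')
#     data = numpy.arange(100, dtype='uint16').reshape(10, 10)
#     buf = codec.encode(data)  # bytes of a complete in-memory TIFF file
#     assert numpy.array_equal(codec.decode(buf), data)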
# ==== tifffile-2025.3.30/tifffile/py.typed (empty typing marker file) ====

# ==== tifffile-2025.3.30/tifffile/tiff2fsspec.py ====

#!/usr/bin/env python3
# tifffile/tiff2fsspec.py

"""Write fsspec ReferenceFileSystem for TIFF file.

positional arguments:
    tifffile        path to the local TIFF input file
    url             remote URL of TIFF file without file name

optional arguments:
    -h, --help      show this help message and exit
    --out OUT       path to the JSON output file
    --series SERIES index of series in file
    --level LEVEL   index of level in series
    --key KEY       index of page in file or series
    --chunkmode CHUNKMODE
                    mode used for chunking {None, pages}
    --ver VER       version of ReferenceFileSystem

For example: ``tiff2fsspec ./test.ome.tif https://server.com/path/``

"""

from __future__ import annotations

import argparse

try:
    from .tifffile import tiff2fsspec
except ImportError:
    try:
        from tifffile.tifffile import tiff2fsspec
    except ImportError:
        from tifffile import tiff2fsspec


def main() -> int:
    """Tiff2fsspec command line usage main function."""
    parser = argparse.ArgumentParser(
        'tiff2fsspec',
        description='Write fsspec ReferenceFileSystem for TIFF file.',
    )
    parser.add_argument(
        'tifffile', type=str, help='path to the local TIFF input file'
    )
    parser.add_argument(
        'url', type=str, help='remote URL of TIFF file without file name'
    )
    parser.add_argument(
        '--out', type=str, default=None, help='path to the JSON output file'
    )
    parser.add_argument(
        '--series', type=int, default=None, help='index of series in file'
    )
    parser.add_argument(
        '--level', type=int, default=None, help='index of level in series'
    )
    parser.add_argument(
        '--key', type=int, default=None, help='index of page in file or series'
    )
    parser.add_argument(
        '--chunkmode',
        type=int,
        default=None,
        help='mode used for chunking {None, pages}',
    )
    parser.add_argument(
        '--ver', type=int, default=None, help='version of ReferenceFileSystem'
    )
    args = parser.parse_args()

    tiff2fsspec(
        args.tifffile,
        args.url,
        out=args.out,
        key=args.key,
        series=args.series,
        level=args.level,
        chunkmode=args.chunkmode,
        version=args.ver,
    )
    return 0


if __name__ == '__main__':
    import sys

    sys.exit(main())

# ==== tifffile-2025.3.30/tifffile/tiffcomment.py ====

#!/usr/bin/env python3
# tifffile/tiffcomment.py

"""Print or replace ImageDescription in first page of TIFF file.

Usage: ``tiffcomment [--set comment] file``

"""

from __future__ import annotations

import os
import sys

try:
    from .tifffile import tiffcomment
except ImportError:
    try:
        from tifffile.tifffile import tiffcomment
    except ImportError:
        from tifffile import tiffcomment


def main(argv: list[str] | None = None) -> int:
    """Tiffcomment command line usage main function."""
    comment: str | bytes | None
    if argv is None:
        argv = sys.argv
    # note: changed from the substring test `argv[1] in '--set'`, which
    # also matched arguments such as '-' or 'se', to an exact comparison
    if len(argv) > 2 and argv[1] == '--set':
        comment = argv[2]
        files = argv[3:]
    else:
        comment = None
        files = argv[1:]
    if len(files) == 0 or any(f.startswith('-') for f in files):
        print()
        print(__doc__.strip())
        return 0
    if comment is None:
        pass
    elif os.path.exists(comment):
        with open(comment, 'rb') as fh:
            comment = fh.read()
    else:
        try:
            comment = comment.encode('ascii')
        except UnicodeEncodeError as exc:
            print(f'{exc}')
            assert isinstance(comment, str)
            comment = comment.encode()
    for file in files:
        try:
            result = tiffcomment(file, comment)
        except Exception as exc:
            print(f'{file}: {exc}')
        else:
            if result:
                print(result)
    return 0


if __name__ == '__main__':
    sys.exit(main())

# ==== tifffile-2025.3.30/tifffile/tifffile.py ====

# tifffile.py

# Copyright (c) 2008-2025, Christoph Gohlke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
#    this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions and the following disclaimer in the documentation
#    and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
#    contributors may be used to endorse or promote products derived from
#    this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

r"""Read and write TIFF files.

Tifffile is a Python library to (1) store NumPy arrays in TIFF (Tagged
Image File Format) files, and (2) read image and metadata from TIFF-like
files used in bioimaging.

Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, GeoTIFF,
Adobe DNG, ZIF (Zoomable Image File Format), MetaMorph STK, Zeiss LSM,
ImageJ hyperstack, Micro-Manager MMStack and NDTiff, SGI, NIHImage,
Olympus FluoView and SIS, ScanImage, Molecular Dynamics GEL, Aperio SVS,
Leica SCN, Roche BIF, PerkinElmer QPTIFF (QPI, PKI), Hamamatsu NDPI,
Argos AVS, and Philips DP formatted files.
Image data can be read as NumPy arrays or Zarr 2 arrays/groups from strips, tiles, pages (IFDs), SubIFDs, higher-order series, and pyramidal levels. Image data can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack compatible files in multi-page, volumetric, pyramidal, memory-mappable, tiled, predicted, or compressed form. Many compression and predictor schemes are supported via the imagecodecs library, including LZW, PackBits, Deflate, PIXTIFF, LZMA, LERC, Zstd, JPEG (8 and 12-bit, lossless), JPEG 2000, JPEG XR, JPEG XL, WebP, PNG, EER, Jetraw, 24-bit floating-point, and horizontal differencing. Tifffile can also be used to inspect TIFF structures, read image data from multi-dimensional file sequences, write fsspec ReferenceFileSystem for TIFF files and image file sequences, patch TIFF tag values, and parse many proprietary metadata formats. :Author: `Christoph Gohlke `_ :License: BSD 3-Clause :Version: 2025.3.30 :DOI: `10.5281/zenodo.6795860 `_ Quickstart ---------- Install the tifffile package and all dependencies from the `Python Package Index `_:: python -m pip install -U tifffile[all] Tifffile is also available in other package repositories such as Anaconda, Debian, and MSYS2. The tifffile library is type annotated and documented via docstrings:: python -c "import tifffile; help(tifffile)" Tifffile can be used as a console script to inspect and preview TIFF files:: python -m tifffile --help See `Examples`_ for using the programming interface. Source code and support are available on `GitHub `_. Support is also provided on the `image.sc `_ forum. Requirements ------------ This revision was tested with the following requirements and dependencies (other versions may work): - `CPython `_ 3.10.11, 3.11.9, 3.12.9, 3.13.2 64-bit - `NumPy `_ 2.2.4 - `Imagecodecs `_ 2025.3.30 (required for encoding or decoding LZW, JPEG, etc. compressed segments) - `Matplotlib `_ 3.10.1 (required for plotting) - `Lxml `_ 5.3.1 (required only for validating and printing XML) - `Zarr `_ 2.18.5 (required only for opening Zarr stores; Zarr 3 is not compatible) - `Fsspec `_ 2025.2.0 (required only for opening ReferenceFileSystem files) Revisions --------- 2025.3.30 - Pass 5110 tests. - Fix for imagecodecs 2025.3.30. 2025.3.13 - Change bytes2str to decode only up to first NULL character (breaking). - Remove stripnull function calls to reduce overhead (#285). - Deprecate stripnull function. 2025.2.18 - Fix julian_datetime milliseconds (#283). - Remove deprecated dtype arguments from imread and FileSequence (breaking). - Remove deprecated imsave and TiffWriter.save function/method (breaking). - Remove deprecated option to pass multiple values to compression (breaking). - Remove deprecated option to pass unit to resolution (breaking). - Remove deprecated enums from TIFF namespace (breaking). - Remove deprecated lazyattr and squeeze_axes functions (breaking). 2025.1.10 - Improve type hints. - Deprecate Python 3.10. 2024.12.12 - Read PlaneProperty from STK UIC1Tag (#280). - Allow 'None' as alias for COMPRESSION.NONE and PREDICTOR.NONE (#274). - Zarr 3 is not supported (#272). 2024.9.20 - Fix writing colormap to ImageJ files (breaking). - Improve typing. - Drop support for Python 3.9. 2024.8.30 - Support writing OME Dataset and some StructuredAnnotations elements. 2024.8.28 - Fix LSM scan types and dimension orders (#269, breaking). - Use IO[bytes] instead of BinaryIO for typing (#268). 2024.8.24 - Do not remove trailing length-1 dimension writing non-shaped file (breaking). 
- Fix writing OME-TIFF with certain modulo axes orders. - Make imshow NaN aware. 2024.8.10 - Relax bitspersample check for JPEG, JPEG2K, and JPEGXL compression (#265). 2024.7.24 - Fix reading contiguous multi-page series via Zarr store (#67). 2024.7.21 - Fix integer overflow in product function caused by numpy types. - Allow tag reader functions to fail. 2024.7.2 - Enable memmap to create empty files with non-native byte order. - Deprecate Python 3.9, support Python 3.13. 2024.6.18 - Ensure TiffPage.nodata is castable to dtype (breaking, #260). - Support Argos AVS slides. 2024.5.22 - Derive TiffPages, TiffPageSeries, FileSequence, StoredShape from Sequence. - Truncate circular IFD chain, do not raise TiffFileError (breaking). - Deprecate access to TiffPages.pages and FileSequence.files. - Enable DeprecationWarning for enums in TIFF namespace. - Remove some deprecated code (breaking). - Add iccprofile property to TiffPage and parameter to TiffWriter.write. - Do not detect VSI as SIS format. - Limit length of logged exception messages. - Fix docstring examples not correctly rendered on GitHub (#254, #255). 2024.5.10 - Support reading JPEGXL compression in DNG 1.7. - Read invalid TIFF created by IDEAS software. 2024.5.3 - Fix reading incompletely written LSM. - Fix reading Philips DP with extra rows of tiles (#253, breaking). 2024.4.24 - Fix compatibility issue with numpy 2 (#252). 2024.4.18 - Fix write_fsspec when last row of tiles is missing in Philips slide (#249). - Add option not to quote file names in write_fsspec. - Allow compressing bilevel images with deflate, LZMA, and Zstd. 2024.2.12 - Deprecate dtype, add chunkdtype parameter in FileSequence.asarray. - Add imreadargs parameters passed to FileSequence.imread. 2024.1.30 - Fix compatibility issue with numpy 2 (#238). - Enable DeprecationWarning for tuple compression argument. - Parse sequence of numbers in xml2dict. 2023.12.9 - … Refer to the CHANGES file for older revisions. Notes ----- TIFF, the Tagged Image File Format, was created by the Aldus Corporation and Adobe Systems Incorporated. Tifffile supports a subset of the TIFF6 specification, mainly 8, 16, 32, and 64-bit integer, 16, 32, and 64-bit float, grayscale and multi-sample images. Specifically, CCITT and OJPEG compression, chroma subsampling without JPEG compression, color space transformations, samples with differing types, or IPTC, ICC, and XMP metadata are not implemented. Besides classic TIFF, tifffile supports several TIFF-like formats that do not strictly adhere to the TIFF6 specification. Some formats allow file and data sizes to exceed the 4 GB limit of the classic TIFF: - **BigTIFF** is identified by version number 43 and uses different file header, IFD, and tag structures with 64-bit offsets. The format also adds 64-bit data types. Tifffile can read and write BigTIFF files. - **ImageJ hyperstacks** store all image data, which may exceed 4 GB, contiguously after the first IFD. Files > 4 GB contain one IFD only. The size and shape of the up to 6-dimensional image data can be determined from the ImageDescription tag of the first IFD, which is Latin-1 encoded. Tifffile can read and write ImageJ hyperstacks. - **OME-TIFF** files store up to 8-dimensional image data in one or multiple TIFF or BigTIFF files. The UTF-8 encoded OME-XML metadata found in the ImageDescription tag of the first IFD defines the position of TIFF IFDs in the high-dimensional image data. Tifffile can read OME-TIFF files (except multi-file pyramidal) and write NumPy arrays to single-file OME-TIFF. 
- **Micro-Manager NDTiff** stores multi-dimensional image data in one or more classic TIFF files. Metadata contained in a separate NDTiff.index binary file defines the position of the TIFF IFDs in the image array. Each TIFF file also contains metadata in a non-TIFF binary structure at offset 8. Downsampled image data of pyramidal datasets are stored in separate folders. Tifffile can read NDTiff files. Version 0 and 1 series, tiling, stitching, and multi-resolution pyramids are not supported. - **Micro-Manager MMStack** stores 6-dimensional image data in one or more classic TIFF files. Metadata contained in non-TIFF binary structures and JSON strings define the image stack dimensions and the position of the image frame data in the file and the image stack. The TIFF structures and metadata are often corrupted or wrong. Tifffile can read MMStack files. - **Carl Zeiss LSM** files store all IFDs below 4 GB and wrap around 32-bit StripOffsets pointing to image data above 4 GB. The StripOffsets of each series and position require separate unwrapping. The StripByteCounts tag contains the number of bytes for the uncompressed data. Tifffile can read LSM files of any size. - **MetaMorph Stack, STK** files contain additional image planes stored contiguously after the image data of the first page. The total number of planes is equal to the count of the UIC2tag. Tifffile can read STK files. - **ZIF**, the Zoomable Image File format, is a subspecification of BigTIFF with SGI's ImageDepth extension and additional compression schemes. Only little-endian, tiled, interleaved, 8-bit per sample images with JPEG, PNG, JPEG XR, and JPEG 2000 compression are allowed. Tifffile can read and write ZIF files. - **Hamamatsu NDPI** files use some 64-bit offsets in the file header, IFD, and tag structures. Single, LONG typed tag values can exceed 32-bit. The high bytes of 64-bit tag values and offsets are stored after IFD structures. Tifffile can read NDPI files > 4 GB. JPEG compressed segments with dimensions >65530 or missing restart markers cannot be decoded with common JPEG libraries. Tifffile works around this limitation by separately decoding the MCUs between restart markers, which performs poorly. BitsPerSample, SamplesPerPixel, and PhotometricInterpretation tags may contain wrong values, which can be corrected using the value of tag 65441. - **Philips TIFF** slides store padded ImageWidth and ImageLength tag values for tiled pages. The values can be corrected using the DICOM_PIXEL_SPACING attributes of the XML formatted description of the first page. Tile offsets and byte counts may be 0. Tifffile can read Philips slides. - **Ventana/Roche BIF** slides store tiles and metadata in a BigTIFF container. Tiles may overlap and require stitching based on the TileJointInfo elements in the XMP tag. Volumetric scans are stored using the ImageDepth extension. Tifffile can read BIF and decode individual tiles but does not perform stitching. - **ScanImage** optionally allows corrupted non-BigTIFF files > 2 GB. The values of StripOffsets and StripByteCounts can be recovered using the constant differences of the offsets of IFD and tag values throughout the file. Tifffile can read such files if the image data are stored contiguously in each page. - **GeoTIFF sparse** files allow strip or tile offsets and byte counts to be 0. Such segments are implicitly set to 0 or the NODATA value on reading. Tifffile can read GeoTIFF sparse files. 
- **Tifffile shaped** files store the array shape and user-provided metadata of multi-dimensional image series in JSON format in the ImageDescription tag of the first page of the series. The format allows multiple series, SubIFDs, sparse segments with zero offset and byte count, and truncated series, where only the first page of a series is present, and the image data are stored contiguously. No other software besides Tifffile supports the truncated format. Other libraries for reading, writing, inspecting, or manipulating scientific TIFF files from Python are `aicsimageio `_, `apeer-ometiff-library `_, `bigtiff `_, `fabio.TiffIO `_, `GDAL `_, `imread `_, `large_image `_, `openslide-python `_, `opentile `_, `pylibtiff `_, `pylsm `_, `pymimage `_, `python-bioformats `_, `pytiff `_, `scanimagetiffreader-python `_, `SimpleITK `_, `slideio `_, `tiffslide `_, `tifftools `_, `tyf `_, `xtiff `_, and `ndtiff `_. References ---------- - TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated. https://www.adobe.io/open/standards/TIFF.html https://download.osgeo.org/libtiff/doc/ - TIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html - The BigTIFF File Format. https://www.awaresystems.be/imaging/tiff/bigtiff.html - MetaMorph Stack (STK) Image File Format. http://mdc.custhelp.com/app/answers/detail/a_id/18862 - Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010). Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011 - The OME-TIFF format. https://docs.openmicroscopy.org/ome-model/latest/ - UltraQuant(r) Version 6.0 for Windows Start-Up Guide. http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf - Micro-Manager File Formats. https://micro-manager.org/wiki/Micro-Manager_File_Formats - ScanImage BigTiff Specification. https://docs.scanimage.org/Appendix/ScanImage+BigTiff+Specification.html - ZIF, the Zoomable Image File format. https://zif.photo/ - GeoTIFF File Format https://gdal.org/drivers/raster/gtiff.html - Cloud optimized GeoTIFF. https://github.com/cogeotiff/cog-spec/blob/master/spec.md - Tags for TIFF and Related Specifications. Digital Preservation. https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml - CIPA DC-008-2016: Exchangeable image file format for digital still cameras: Exif Version 2.31. http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf - The EER (Electron Event Representation) file format. https://github.com/fei-company/EerReaderLib - Digital Negative (DNG) Specification. Version 1.7.1.0, September 2023. https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf - Roche Digital Pathology. BIF image file format for digital pathology. https://diagnostics.roche.com/content/dam/diagnostics/Blueprint/en/pdf/rmd/Roche-Digital-Pathology-BIF-Whitepaper.pdf - Astro-TIFF specification. https://astro-tiff.sourceforge.io/ - Aperio Technologies, Inc. Digital Slides and Third-Party Data Interchange. Aperio_Digital_Slides_and_Third-party_data_interchange.pdf - PerkinElmer image format. https://downloads.openmicroscopy.org/images/Vectra-QPTIFF/perkinelmer/PKI_Image%20Format.docx - NDTiffStorage. https://github.com/micro-manager/NDTiffStorage - Argos AVS File Format. 
https://github.com/user-attachments/files/15580286/ARGOS.AVS.File.Format.pdf Examples -------- Write a NumPy array to a single-page RGB TIFF file: >>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb') Read the image from the TIFF file as NumPy array: >>> image = imread('temp.tif') >>> image.shape (256, 256, 3) Use the `photometric` and `planarconfig` arguments to write a 3x3x3 NumPy array to an interleaved RGB, a planar RGB, or a 3-page grayscale TIFF: >>> data = numpy.random.randint(0, 255, (3, 3, 3), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb') >>> imwrite('temp.tif', data, photometric='rgb', planarconfig='separate') >>> imwrite('temp.tif', data, photometric='minisblack') Use the `extrasamples` argument to specify how extra components are interpreted, for example, for an RGBA image with unassociated alpha channel: >>> data = numpy.random.randint(0, 255, (256, 256, 4), 'uint8') >>> imwrite('temp.tif', data, photometric='rgb', extrasamples=['unassalpha']) Write a 3-dimensional NumPy array to a multi-page, 16-bit grayscale TIFF file: >>> data = numpy.random.randint(0, 2**12, (64, 301, 219), 'uint16') >>> imwrite('temp.tif', data, photometric='minisblack') Read the whole image stack from the multi-page TIFF file as NumPy array: >>> image_stack = imread('temp.tif') >>> image_stack.shape (64, 301, 219) >>> image_stack.dtype dtype('uint16') Read the image from the first page in the TIFF file as NumPy array: >>> image = imread('temp.tif', key=0) >>> image.shape (301, 219) Read images from a selected range of pages: >>> images = imread('temp.tif', key=range(4, 40, 2)) >>> images.shape (18, 301, 219) Iterate over all pages in the TIFF file and successively read images: >>> with TiffFile('temp.tif') as tif: ... for page in tif.pages: ... image = page.asarray() ... Get information about the image stack in the TIFF file without reading any image data: >>> tif = TiffFile('temp.tif') >>> len(tif.pages) # number of pages in the file 64 >>> page = tif.pages[0] # get shape and dtype of image in first page >>> page.shape (301, 219) >>> page.dtype dtype('uint16') >>> page.axes 'YX' >>> series = tif.series[0] # get shape and dtype of first image series >>> series.shape (64, 301, 219) >>> series.dtype dtype('uint16') >>> series.axes 'QYX' >>> tif.close() Inspect the "XResolution" tag from the first page in the TIFF file: >>> with TiffFile('temp.tif') as tif: ... tag = tif.pages[0].tags['XResolution'] ... >>> tag.value (1, 1) >>> tag.name 'XResolution' >>> tag.code 282 >>> tag.count 1 >>> tag.dtype Iterate over all tags in the TIFF file: >>> with TiffFile('temp.tif') as tif: ... for page in tif.pages: ... for tag in page.tags: ... tag_name, tag_value = tag.name, tag.value ... Overwrite the value of an existing tag, for example, XResolution: >>> with TiffFile('temp.tif', mode='r+') as tif: ... _ = tif.pages[0].tags['XResolution'].overwrite((96000, 1000)) ... Write a 5-dimensional floating-point array using BigTIFF format, separate color components, tiling, Zlib compression level 8, horizontal differencing predictor, and additional metadata: >>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32') >>> imwrite( ... 'temp.tif', ... data, ... bigtiff=True, ... photometric='rgb', ... planarconfig='separate', ... tile=(32, 32), ... compression='zlib', ... compressionargs={'level': 8}, ... predictor=True, ... metadata={'axes': 'TZCYX'}, ... 
Write a 10 fps time series of volumes with xyz voxel size
2.6755x2.6755x3.9474 micron^3 to an ImageJ hyperstack formatted TIFF file:

>>> volume = numpy.random.randn(6, 57, 256, 256).astype('float32')
>>> image_labels = [f'{i}' for i in range(volume.shape[0] * volume.shape[1])]
>>> imwrite(
...     'temp.tif',
...     volume,
...     imagej=True,
...     resolution=(1.0 / 2.6755, 1.0 / 2.6755),
...     metadata={
...         'spacing': 3.947368,
...         'unit': 'um',
...         'finterval': 1 / 10,
...         'fps': 10.0,
...         'axes': 'TZYX',
...         'Labels': image_labels,
...     },
... )

Read the volume and metadata from the ImageJ hyperstack file:

>>> with TiffFile('temp.tif') as tif:
...     volume = tif.asarray()
...     axes = tif.series[0].axes
...     imagej_metadata = tif.imagej_metadata
...
>>> volume.shape
(6, 57, 256, 256)
>>> axes
'TZYX'
>>> imagej_metadata['slices']
57
>>> imagej_metadata['frames']
6

Memory-map the contiguous image data in the ImageJ hyperstack file:

>>> memmap_volume = memmap('temp.tif')
>>> memmap_volume.shape
(6, 57, 256, 256)
>>> del memmap_volume

Create a TIFF file containing an empty image and write to the memory-mapped
NumPy array (note: this does not work with compression or tiling):

>>> memmap_image = memmap(
...     'temp.tif', shape=(256, 256, 3), dtype='float32', photometric='rgb'
... )
>>> type(memmap_image)
<class 'numpy.memmap'>
>>> memmap_image[255, 255, 1] = 1.0
>>> memmap_image.flush()
>>> del memmap_image

Write two NumPy arrays to a multi-series TIFF file (note: other TIFF readers
will not recognize the two series; use the OME-TIFF format for better
interoperability):

>>> series0 = numpy.random.randint(0, 255, (32, 32, 3), 'uint8')
>>> series1 = numpy.random.randint(0, 255, (4, 256, 256), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
...     tif.write(series0, photometric='rgb')
...     tif.write(series1, photometric='minisblack')
...

Read the second image series from the TIFF file:

>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(4, 256, 256)

Successively write the frames of one contiguous series to a TIFF file:

>>> data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif') as tif:
...     for frame in data:
...         tif.write(frame, contiguous=True)
...

Append an image series to the existing TIFF file (note: this does not work
with ImageJ hyperstack or OME-TIFF files):

>>> data = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb', append=True)

Create a TIFF file from a generator of tiles:

>>> data = numpy.random.randint(0, 2**12, (31, 33, 3), 'uint16')
>>> def tiles(data, tileshape):
...     for y in range(0, data.shape[0], tileshape[0]):
...         for x in range(0, data.shape[1], tileshape[1]):
...             yield data[y : y + tileshape[0], x : x + tileshape[1]]
...
>>> imwrite(
...     'temp.tif',
...     tiles(data, (16, 16)),
...     tile=(16, 16),
...     shape=data.shape,
...     dtype=data.dtype,
...     photometric='rgb',
... )

Write a multi-dimensional, multi-resolution (pyramidal), multi-series
OME-TIFF file with optional metadata. Sub-resolution images are written to
SubIFDs. Limit parallel encoding to 2 threads. Write a thumbnail image as a
separate image series:

>>> data = numpy.random.randint(0, 255, (8, 2, 512, 512, 3), 'uint8')
>>> subresolutions = 2
>>> pixelsize = 0.29  # micrometer
>>> with TiffWriter('temp.ome.tif', bigtiff=True) as tif:
...     metadata = {
...         'axes': 'TCYXS',
...         'SignificantBits': 8,
...         'TimeIncrement': 0.1,
...         'TimeIncrementUnit': 's',
...         'PhysicalSizeX': pixelsize,
...         'PhysicalSizeXUnit': 'µm',
...         'PhysicalSizeY': pixelsize,
...         'PhysicalSizeYUnit': 'µm',
...         'Channel': {'Name': ['Channel 1', 'Channel 2']},
...         'Plane': {'PositionX': [0.0] * 16, 'PositionXUnit': ['µm'] * 16},
...         'Description': 'A multi-dimensional, multi-resolution image',
...         'MapAnnotation': {  # for OMERO
...             'Namespace': 'openmicroscopy.org/PyramidResolution',
...             '1': '256 256',
...             '2': '128 128',
...         },
...     }
...     options = dict(
...         photometric='rgb',
...         tile=(128, 128),
...         compression='jpeg',
...         resolutionunit='CENTIMETER',
...         maxworkers=2,
...     )
...     tif.write(
...         data,
...         subifds=subresolutions,
...         resolution=(1e4 / pixelsize, 1e4 / pixelsize),
...         metadata=metadata,
...         **options,
...     )
...     # write pyramid levels to the two subifds
...     # in production use resampling to generate sub-resolution images
...     for level in range(subresolutions):
...         mag = 2 ** (level + 1)
...         tif.write(
...             data[..., ::mag, ::mag, :],
...             subfiletype=1,
...             resolution=(1e4 / mag / pixelsize, 1e4 / mag / pixelsize),
...             **options,
...         )
...     # add a thumbnail image as a separate series
...     # it is recognized by QuPath as an associated image
...     thumbnail = (data[0, 0, ::8, ::8] >> 2).astype('uint8')
...     tif.write(thumbnail, metadata={'Name': 'thumbnail'})
...

Access the image levels in the pyramidal OME-TIFF file:

>>> baseimage = imread('temp.ome.tif')
>>> second_level = imread('temp.ome.tif', series=0, level=1)
>>> with TiffFile('temp.ome.tif') as tif:
...     baseimage = tif.series[0].asarray()
...     second_level = tif.series[0].levels[1].asarray()
...     number_levels = len(tif.series[0].levels)  # includes base level
...

Iterate over and decode single JPEG compressed tiles in the TIFF file:

>>> with TiffFile('temp.ome.tif') as tif:
...     fh = tif.filehandle
...     for page in tif.pages:
...         for index, (offset, bytecount) in enumerate(
...             zip(page.dataoffsets, page.databytecounts)
...         ):
...             _ = fh.seek(offset)
...             data = fh.read(bytecount)
...             tile, indices, shape = page.decode(
...                 data, index, jpegtables=page.jpegtables
...             )
...

Use Zarr 2 to read parts of the tiled, pyramidal images in the TIFF file:

>>> import zarr
>>> store = imread('temp.ome.tif', aszarr=True)
>>> z = zarr.open(store, mode='r')
>>> z
<zarr.hierarchy.Group '/' read-only>
>>> z[0]  # base layer
<zarr.core.Array '/0' (8, 2, 512, 512, 3) uint8 read-only>
>>> z[0][2, 0, 128:384, 256:].shape  # read a tile from the base layer
(256, 256, 3)
>>> store.close()

Load the base layer from the Zarr 2 store as a dask array:

>>> import dask.array
>>> store = imread('temp.ome.tif', aszarr=True)
>>> dask.array.from_zarr(store, 0)
dask.array<...shape=(8, 2, 512, 512, 3)...chunksize=(1, 1, 128, 128, 3)...
>>> store.close()

Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format:

>>> store = imread('temp.ome.tif', aszarr=True)
>>> store.write_fsspec('temp.ome.tif.json', url='file://')
>>> store.close()

Open the fsspec ReferenceFileSystem as a Zarr group:

>>> import fsspec
>>> import imagecodecs.numcodecs
>>> imagecodecs.numcodecs.register_codecs()
>>> mapper = fsspec.get_mapper(
...     'reference://', fo='temp.ome.tif.json', target_protocol='file'
... )
>>> z = zarr.open(mapper, mode='r')
>>> z
<zarr.hierarchy.Group '/' read-only>

Create an OME-TIFF file containing an empty, tiled image series and write
to it via the Zarr 2 interface (note: this does not work with compression):

>>> imwrite(
...     'temp2.ome.tif',
...     shape=(8, 800, 600),
...     dtype='uint16',
...     photometric='minisblack',
...     tile=(128, 128),
...     metadata={'axes': 'CYX'},
...
) >>> store = imread('temp2.ome.tif', mode='r+', aszarr=True) >>> z = zarr.open(store, mode='r+') >>> z >>> z[3, 100:200, 200:300:2] = 1024 >>> store.close() Read images from a sequence of TIFF files as NumPy array using two I/O worker threads: >>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64)) >>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64)) >>> image_sequence = imread( ... ['temp_C001T001.tif', 'temp_C001T002.tif'], ioworkers=2, maxworkers=1 ... ) >>> image_sequence.shape (2, 64, 64) >>> image_sequence.dtype dtype('float64') Read an image stack from a series of TIFF files with a file name pattern as NumPy or Zarr 2 arrays: >>> image_sequence = TiffSequence('temp_C0*.tif', pattern=r'_(C)(\d+)(T)(\d+)') >>> image_sequence.shape (1, 2) >>> image_sequence.axes 'CT' >>> data = image_sequence.asarray() >>> data.shape (1, 2, 64, 64) >>> store = image_sequence.aszarr() >>> zarr.open(store, mode='r') >>> image_sequence.close() Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format: >>> store = image_sequence.aszarr() >>> store.write_fsspec('temp.json', url='file://') Open the fsspec ReferenceFileSystem as a Zarr 2 array: >>> import fsspec >>> import tifffile.numcodecs >>> tifffile.numcodecs.register_codec() >>> mapper = fsspec.get_mapper( ... 'reference://', fo='temp.json', target_protocol='file' ... ) >>> zarr.open(mapper, mode='r') Inspect the TIFF file from the command line:: $ python -m tifffile temp.ome.tif """ from __future__ import annotations __version__ = '2025.3.30' __all__ = [ '__version__', 'TiffFile', 'TiffFileError', 'TiffFrame', 'TiffPage', 'TiffPages', 'TiffPageSeries', 'TiffReader', 'TiffSequence', 'TiffTag', 'TiffTags', 'TiffTagRegistry', 'TiffWriter', 'TiffFormat', 'ZarrFileSequenceStore', 'ZarrStore', 'ZarrTiffStore', 'imread', 'imshow', 'imwrite', 'lsm2bin', 'memmap', 'read_ndtiff_index', 'read_gdal_structural_metadata', 'read_micromanager_metadata', 'read_scanimage_metadata', 'tiff2fsspec', 'tiffcomment', 'TIFF', 'DATATYPE', 'CHUNKMODE', 'COMPRESSION', 'EXTRASAMPLE', 'FILETYPE', 'FILLORDER', 'OFILETYPE', 'ORIENTATION', 'PHOTOMETRIC', 'PLANARCONFIG', 'PREDICTOR', 'RESUNIT', 'SAMPLEFORMAT', 'OmeXml', 'OmeXmlError', 'FileCache', 'FileHandle', 'FileSequence', 'StoredShape', 'TiledSequence', 'NullContext', 'Timer', 'askopenfilename', 'astype', 'create_output', 'enumarg', 'enumstr', 'format_size', 'hexdump', 'imagej_description', 'imagej_metadata_tag', 'logger', 'matlabstr2py', 'natural_sorted', 'nullfunc', 'parse_filenames', 'parse_kwargs', 'pformat', 'product', 'repeat_nd', 'reshape_axes', 'reshape_nd', 'strptime', 'transpose_axes', 'update_kwargs', 'validate_jhove', 'xml2dict', '_TIFF', # private # deprecated 'stripnull', ] import binascii import collections import enum import glob import io import json import logging import math import os import re import struct import sys import threading import time import warnings from collections.abc import ( Callable, Iterable, Mapping, MutableMapping, Sequence, ) from concurrent.futures import ThreadPoolExecutor from datetime import datetime as DateTime from datetime import timedelta as TimeDelta from functools import cached_property import numpy try: import imagecodecs except ImportError: # load pure Python implementation of some codecs try: from . 
import _imagecodecs as imagecodecs # type: ignore[no-redef] except ImportError: import _imagecodecs as imagecodecs # type: ignore[no-redef] from typing import IO, TYPE_CHECKING, cast, final, overload if TYPE_CHECKING: from collections.abc import ( Collection, Container, ItemsView, Iterator, KeysView, ValuesView, ) from typing import Any, Literal, Optional, TextIO, Union from numpy.typing import ArrayLike, DTypeLike, NDArray ByteOrder = Literal['>', '<'] OutputType = Union[str, IO[bytes], NDArray[Any], None] TagTuple = tuple[ Union[int, str], Union[int, str], Optional[int], Any, bool ] @overload def imread( files: ( str | os.PathLike[Any] | FileHandle | IO[bytes] | Sequence[str | os.PathLike[Any]] | None ) = None, *, selection: Any | None = None, # TODO: type this aszarr: Literal[False] = ..., key: int | slice | Iterable[int] | None = None, series: int | None = None, level: int | None = None, squeeze: bool | None = None, maxworkers: int | None = None, buffersize: int | None = None, mode: Literal['r', 'r+'] | None = None, name: str | None = None, offset: int | None = None, size: int | None = None, pattern: str | None = None, axesorder: Sequence[int] | None = None, categories: dict[str, dict[str, int]] | None = None, imread: Callable[..., NDArray[Any]] | None = None, sort: Callable[..., Any] | bool | None = None, container: str | os.PathLike[Any] | None = None, chunkshape: tuple[int, ...] | None = None, dtype: DTypeLike | None = None, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, ioworkers: int | None = 1, chunkmode: CHUNKMODE | int | str | None = None, fillvalue: int | float | None = None, zattrs: dict[str, Any] | None = None, multiscales: bool | None = None, omexml: str | None = None, out: OutputType = None, out_inplace: bool | None = None, _multifile: bool | None = None, _useframes: bool | None = None, **kwargs: Any, ) -> NDArray[Any]: ... @overload def imread( files: ( str | os.PathLike[Any] | FileHandle | IO[bytes] | Sequence[str | os.PathLike[Any]] | None ) = None, *, selection: Any | None = None, # TODO: type this aszarr: Literal[True], key: int | slice | Iterable[int] | None = None, series: int | None = None, level: int | None = None, squeeze: bool | None = None, maxworkers: int | None = None, buffersize: int | None = None, mode: Literal['r', 'r+'] | None = None, name: str | None = None, offset: int | None = None, size: int | None = None, pattern: str | None = None, axesorder: Sequence[int] | None = None, categories: dict[str, dict[str, int]] | None = None, imread: Callable[..., NDArray[Any]] | None = None, imreadargs: dict[str, Any] | None = None, sort: Callable[..., Any] | bool | None = None, container: str | os.PathLike[Any] | None = None, chunkshape: tuple[int, ...] | None = None, chunkdtype: DTypeLike | None = None, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, ioworkers: int | None = 1, chunkmode: CHUNKMODE | int | str | None = None, fillvalue: int | float | None = None, zattrs: dict[str, Any] | None = None, multiscales: bool | None = None, omexml: str | None = None, out: OutputType = None, out_inplace: bool | None = None, _multifile: bool | None = None, _useframes: bool | None = None, **kwargs: Any, ) -> ZarrTiffStore | ZarrFileSequenceStore: ... 
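# Note: the 'imread' overloads above and below exist only for static type
# checking. With aszarr=False the function returns a NumPy array, with
# aszarr=True a Zarr 2 store, and the final overload covers calls where
# 'aszarr' is not a literal. The implementation follows the overloads.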
@overload def imread( files: ( str | os.PathLike[Any] | FileHandle | IO[bytes] | Sequence[str | os.PathLike[Any]] | None ) = None, *, selection: Any | None = None, # TODO: type this aszarr: bool = False, key: int | slice | Iterable[int] | None = None, series: int | None = None, level: int | None = None, squeeze: bool | None = None, maxworkers: int | None = None, buffersize: int | None = None, mode: Literal['r', 'r+'] | None = None, name: str | None = None, offset: int | None = None, size: int | None = None, pattern: str | None = None, axesorder: Sequence[int] | None = None, categories: dict[str, dict[str, int]] | None = None, imread: Callable[..., NDArray[Any]] | None = None, imreadargs: dict[str, Any] | None = None, sort: Callable[..., Any] | bool | None = None, container: str | os.PathLike[Any] | None = None, chunkshape: tuple[int, ...] | None = None, chunkdtype: DTypeLike | None = None, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, ioworkers: int | None = 1, chunkmode: CHUNKMODE | int | str | None = None, fillvalue: int | float | None = None, zattrs: dict[str, Any] | None = None, multiscales: bool | None = None, omexml: str | None = None, out: OutputType = None, out_inplace: bool | None = None, _multifile: bool | None = None, _useframes: bool | None = None, **kwargs: Any, ) -> NDArray[Any] | ZarrTiffStore | ZarrFileSequenceStore: ... def imread( files: ( str | os.PathLike[Any] | FileHandle | IO[bytes] | Sequence[str | os.PathLike[Any]] | None ) = None, *, selection: Any | None = None, # TODO: type this aszarr: bool = False, key: int | slice | Iterable[int] | None = None, series: int | None = None, level: int | None = None, squeeze: bool | None = None, maxworkers: int | None = None, buffersize: int | None = None, mode: Literal['r', 'r+'] | None = None, name: str | None = None, offset: int | None = None, size: int | None = None, pattern: str | None = None, axesorder: Sequence[int] | None = None, categories: dict[str, dict[str, int]] | None = None, imread: Callable[..., NDArray[Any]] | None = None, imreadargs: dict[str, Any] | None = None, sort: Callable[..., Any] | bool | None = None, container: str | os.PathLike[Any] | None = None, chunkshape: tuple[int, ...] | None = None, chunkdtype: DTypeLike | None = None, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, ioworkers: int | None = 1, chunkmode: CHUNKMODE | int | str | None = None, fillvalue: int | float | None = None, zattrs: dict[str, Any] | None = None, multiscales: bool | None = None, omexml: str | None = None, out: OutputType = None, out_inplace: bool | None = None, _multifile: bool | None = None, _useframes: bool | None = None, **kwargs: Any, ) -> NDArray[Any] | ZarrTiffStore | ZarrFileSequenceStore: """Return image from TIFF file(s) as NumPy array or Zarr 2 store. The first image series in the file(s) is returned by default. Parameters: files: File name, seekable binary stream, glob pattern, or sequence of file names. May be *None* if `container` is specified. selection: Subset of image to be extracted. If not None, a Zarr 2 array is created, indexed with the `selection` value, and returned as a NumPy array. Only segments that are part of the selection will be read from file. Refer to the Zarr 2 documentation for valid selections. Depending on selection size, image size, and storage properties, it may be more efficient to read the whole image from file and then index it. aszarr: Return file sequences, series, or single pages as Zarr 2 store instead of NumPy array if `selection` is None. 
mode, name, offset, size, omexml, _multifile, _useframes: Passed to :py:class:`TiffFile`. key, series, level, squeeze, maxworkers, buffersize: Passed to :py:meth:`TiffFile.asarray` or :py:meth:`TiffFile.aszarr`. imread, container, sort, pattern, axesorder, axestiled, categories,\ ioworkers: Passed to :py:class:`FileSequence`. chunkmode, fillvalue, zattrs, multiscales: Passed to :py:class:`ZarrTiffStore` or :py:class:`ZarrFileSequenceStore`. chunkshape, chunkdtype: Passed to :py:meth:`FileSequence.asarray` or :py:class:`ZarrFileSequenceStore`. out_inplace: Passed to :py:meth:`FileSequence.asarray` out: Passed to :py:meth:`TiffFile.asarray`, :py:meth:`FileSequence.asarray`, or :py:func:`zarr_selection`. imreadargs: Additional arguments passed to :py:attr:`FileSequence.imread`. **kwargs: Additional arguments passed to :py:class:`TiffFile` or :py:attr:`FileSequence.imread`. Returns: Images from specified files, series, or pages. Zarr 2 store instances must be closed after use. See :py:meth:`TiffPage.asarray` for operations that are applied (or not) to the image data stored in the file. """ store: ZarrStore aszarr = aszarr or (selection is not None) is_flags = parse_kwargs(kwargs, *(k for k in kwargs if k[:3] == 'is_')) if imread is None and kwargs: raise TypeError( 'imread() got unexpected keyword arguments ' + ', '.join(f"'{key}'" for key in kwargs) ) if container is None: if isinstance(files, str) and ('*' in files or '?' in files): files = glob.glob(files) if not files: raise ValueError('no files found') if ( isinstance(files, Sequence) and not isinstance(files, str) and len(files) == 1 ): files = files[0] if isinstance(files, str) or not isinstance(files, Sequence): with TiffFile( files, mode=mode, name=name, offset=offset, size=size, omexml=omexml, _multifile=_multifile, _useframes=_useframes, **is_flags, ) as tif: if aszarr: assert key is None or isinstance(key, int) store = tif.aszarr( key=key, series=series, level=level, squeeze=squeeze, maxworkers=maxworkers, buffersize=buffersize, chunkmode=chunkmode, fillvalue=fillvalue, zattrs=zattrs, multiscales=multiscales, ) if selection is None: return store return zarr_selection(store, selection, out=out) return tif.asarray( key=key, series=series, level=level, squeeze=squeeze, maxworkers=maxworkers, buffersize=buffersize, out=out, ) elif isinstance(files, (FileHandle, IO)): raise ValueError('BinaryIO not supported') imread_kwargs = kwargs_notnone( key=key, series=series, level=level, squeeze=squeeze, maxworkers=maxworkers, buffersize=buffersize, imreadargs=imreadargs, _multifile=_multifile, _useframes=_useframes, **is_flags, **kwargs, ) with TiffSequence( files, pattern=pattern, axesorder=axesorder, categories=categories, container=container, sort=sort, **kwargs_notnone(imread=imread), ) as imseq: if aszarr: store = imseq.aszarr( axestiled=axestiled, chunkmode=chunkmode, chunkshape=chunkshape, chunkdtype=chunkdtype, fillvalue=fillvalue, zattrs=zattrs, **imread_kwargs, ) if selection is None: return store return zarr_selection(store, selection, out=out) return imseq.asarray( axestiled=axestiled, chunkshape=chunkshape, chunkdtype=chunkdtype, ioworkers=ioworkers, out=out, out_inplace=out_inplace, **imread_kwargs, ) def imwrite( file: str | os.PathLike[Any] | FileHandle | IO[bytes], /, data: ( ArrayLike | Iterator[NDArray[Any] | None] | Iterator[bytes] | None ) = None, *, mode: Literal['w', 'x', 'r+'] | None = None, bigtiff: bool | None = None, byteorder: ByteOrder | None = None, imagej: bool = False, ome: bool | None = None, shaped: bool | None 
= None, append: bool = False, shape: Sequence[int] | None = None, dtype: DTypeLike | None = None, photometric: PHOTOMETRIC | int | str | None = None, planarconfig: PLANARCONFIG | int | str | None = None, extrasamples: Sequence[EXTRASAMPLE | int | str] | None = None, volumetric: bool = False, tile: Sequence[int] | None = None, rowsperstrip: int | None = None, bitspersample: int | None = None, compression: COMPRESSION | int | str | None = None, compressionargs: dict[str, Any] | None = None, predictor: PREDICTOR | int | str | bool | None = None, subsampling: tuple[int, int] | None = None, jpegtables: bytes | None = None, iccprofile: bytes | None = None, colormap: ArrayLike | None = None, description: str | bytes | None = None, datetime: str | bool | DateTime | None = None, resolution: ( tuple[float | tuple[int, int], float | tuple[int, int]] | None ) = None, resolutionunit: RESUNIT | int | str | None = None, subfiletype: FILETYPE | int | None = None, software: str | bytes | bool | None = None, # subifds: int | Sequence[int] | None = None, metadata: dict[str, Any] | None = {}, extratags: Sequence[TagTuple] | None = None, contiguous: bool = False, truncate: bool = False, align: int | None = None, maxworkers: int | None = None, buffersize: int | None = None, returnoffset: bool = False, ) -> tuple[int, int] | None: """Write NumPy array to TIFF file. A BigTIFF file is written if the data size is larger than 4 GB less 32 MB for metadata, and `bigtiff` is not *False*, and `imagej`, `truncate` and `compression` are not enabled. Unless `byteorder` is specified, the TIFF file byte order is determined from the dtype of `data` or the `dtype` argument. Parameters: file: Passed to :py:class:`TiffWriter`. data, shape, dtype: Passed to :py:meth:`TiffWriter.write`. mode, append, byteorder, bigtiff, imagej, ome, shaped: Passed to :py:class:`TiffWriter`. photometric, planarconfig, extrasamples, volumetric, tile,\ rowsperstrip, bitspersample, compression, compressionargs, predictor,\ subsampling, jpegtables, iccprofile, colormap, description, datetime,\ resolution, resolutionunit, subfiletype, software,\ metadata, extratags, maxworkers, buffersize, \ contiguous, truncate, align: Passed to :py:meth:`TiffWriter.write`. returnoffset: Return offset and number of bytes of memory-mappable image data in file. Returns: If `returnoffset` is *True* and the image data in the file are memory-mappable, the offset and number of bytes of the image data in the file. 
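
    For example, a minimal illustration of writing an empty file by
    specifying `shape` and `dtype` instead of `data` (the file name
    'temp_empty.tif' is a placeholder):

    >>> imwrite('temp_empty.tif', shape=(32, 32), dtype='uint8')
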
""" if data is None: # write empty file if shape is None or dtype is None: raise ValueError("missing required 'shape' or 'dtype' argument") dtype = numpy.dtype(dtype) shape = tuple(shape) datasize = product(shape) * dtype.itemsize if byteorder is None: byteorder = dtype.byteorder # type: ignore[assignment] else: # try: datasize = data.nbytes # type: ignore[union-attr] if byteorder is None: byteorder = data.dtype.byteorder # type: ignore[union-attr] except Exception: datasize = 0 if bigtiff is None: bigtiff = ( datasize > 2**32 - 2**25 and not imagej and not truncate and compression in {None, 0, 1, 'NONE', 'None', 'none'} ) with TiffWriter( file, mode=mode, bigtiff=bigtiff, byteorder=byteorder, append=append, imagej=imagej, ome=ome, shaped=shaped, ) as tif: result = tif.write( data, shape=shape, dtype=dtype, photometric=photometric, planarconfig=planarconfig, extrasamples=extrasamples, volumetric=volumetric, tile=tile, rowsperstrip=rowsperstrip, bitspersample=bitspersample, compression=compression, compressionargs=compressionargs, predictor=predictor, subsampling=subsampling, jpegtables=jpegtables, iccprofile=iccprofile, colormap=colormap, description=description, datetime=datetime, resolution=resolution, resolutionunit=resolutionunit, subfiletype=subfiletype, software=software, metadata=metadata, extratags=extratags, contiguous=contiguous, truncate=truncate, align=align, maxworkers=maxworkers, buffersize=buffersize, returnoffset=returnoffset, ) return result def memmap( filename: str | os.PathLike[Any], /, *, shape: Sequence[int] | None = None, dtype: DTypeLike | None = None, page: int | None = None, series: int = 0, level: int = 0, mode: Literal['r+', 'r', 'c'] = 'r+', **kwargs: Any, ) -> numpy.memmap[Any, Any]: """Return memory-mapped NumPy array of image data stored in TIFF file. Memory-mapping requires the image data stored in native byte order, without tiling, compression, predictors, etc. If `shape` and `dtype` are provided, existing files are overwritten or appended to depending on the `append` argument. Else, the image data of a specified page or series in an existing file are memory-mapped. By default, the image data of the first series are memory-mapped. Call `flush` to write any changes in the array to the file. Parameters: filename: Name of TIFF file which stores array. shape: Shape of empty array. dtype: Datatype of empty array. page: Index of page which image data to memory-map. series: Index of page series which image data to memory-map. level: Index of pyramid level which image data to memory-map. mode: Memory-map file open mode. The default is 'r+', which opens existing file for reading and writing. **kwargs: Additional arguments passed to :py:func:`imwrite` or :py:class:`TiffFile`. Returns: Image in TIFF file as memory-mapped NumPy array. Raises: ValueError: Image data in TIFF file are not memory-mappable. 
""" filename = os.fspath(filename) if shape is not None: shape = tuple(shape) if shape is not None and dtype is not None: # create a new, empty array dtype = numpy.dtype(dtype) if 'byteorder' in kwargs: dtype = dtype.newbyteorder(kwargs['byteorder']) kwargs.update( data=None, shape=shape, dtype=dtype, align=TIFF.ALLOCATIONGRANULARITY, returnoffset=True, ) result = imwrite(filename, **kwargs) if result is None: # TODO: fail before creating file or writing data raise ValueError('image data are not memory-mappable') offset = result[0] else: # use existing file with TiffFile(filename, **kwargs) as tif: if page is None: tiffseries = tif.series[series].levels[level] if tiffseries.dataoffset is None: raise ValueError('image data are not memory-mappable') shape = tiffseries.shape dtype = tiffseries.dtype offset = tiffseries.dataoffset else: tiffpage = tif.pages[page] if not tiffpage.is_memmappable: raise ValueError('image data are not memory-mappable') offset = tiffpage.dataoffsets[0] shape = tiffpage.shape dtype = tiffpage.dtype assert dtype is not None dtype = numpy.dtype(tif.byteorder + dtype.char) return numpy.memmap(filename, dtype, mode, offset, shape, 'C') class TiffFileError(Exception): """Exception to indicate invalid TIFF structure.""" @final class TiffWriter: """Write NumPy arrays to TIFF file. TiffWriter's main purpose is saving multi-dimensional NumPy arrays in TIFF containers, not to create any possible TIFF format. Specifically, ExifIFD and GPSIFD tags are not supported. TiffWriter instances must be closed with :py:meth:`TiffWriter.close`, which is automatically called when using the 'with' context manager. TiffWriter instances are not thread-safe. All attributes are read-only. Parameters: file: Specifies file to write. mode: Binary file open mode if `file` is file name. The default is 'w', which opens files for writing, truncating existing files. 'x' opens files for exclusive creation, failing on existing files. 'r+' opens files for updating, enabling `append`. bigtiff: Write 64-bit BigTIFF formatted file, which can exceed 4 GB. By default, a classic 32-bit TIFF file is written, which is limited to 4 GB. If `append` is *True*, the existing file's format is used. byteorder: Endianness of TIFF format. One of '<', '>', '=', or '|'. The default is the system's native byte order. append: If `file` is existing standard TIFF file, append image data and tags to file. Parameters `bigtiff` and `byteorder` set from existing file. Appending does not scale well with the number of pages already in the file and may corrupt specifically formatted TIFF files such as OME-TIFF, LSM, STK, ImageJ, or FluoView. imagej: Write ImageJ hyperstack compatible file if `ome` is not enabled. This format can handle data types uint8, uint16, or float32 and data shapes up to 6 dimensions in TZCYXS order. RGB images (S=3 or S=4) must be `uint8`. ImageJ's default byte order is big-endian, but this implementation uses the system's native byte order by default. ImageJ hyperstacks do not support BigTIFF or compression. The ImageJ file format is undocumented. Use FIJI's Bio-Formats import function for compressed files. ome: Write OME-TIFF compatible file. By default, the OME-TIFF format is used if the file name extension contains '.ome.', `imagej` is not enabled, and the `description` argument in the first call of :py:meth:`TiffWriter.write` is not specified. The format supports multiple, up to 9 dimensional image series. The default axes order is TZC(S)YX(S). Refer to the OME model for restrictions of this format. 
shaped: Write tifffile "shaped" compatible file. The shape of multi-dimensional images is stored in JSON format in a ImageDescription tag of the first page of a series. This is the default format used by tifffile unless `imagej` or `ome` are enabled or ``metadata=None`` is passed to :py:meth:`TiffWriter.write`. Raises: ValueError: The TIFF file cannot be appended to. Use ``append='force'`` to force appending, which may result in a corrupted file. """ tiff: TiffFormat """Format of TIFF file being written.""" _fh: FileHandle _omexml: OmeXml | None _ome: bool | None # writing OME-TIFF format _imagej: bool # writing ImageJ format _tifffile: bool # writing Tifffile shaped format _truncate: bool _metadata: dict[str, Any] | None _colormap: NDArray[numpy.uint16] | None _tags: list[tuple[int, bytes, Any, bool]] | None _datashape: tuple[int, ...] | None # shape of data in consecutive pages _datadtype: numpy.dtype[Any] | None # data type _dataoffset: int | None # offset to data _databytecounts: list[int] | None # byte counts per plane _dataoffsetstag: int | None # strip or tile offset tag code _descriptiontag: TiffTag | None # TiffTag for updating comment _ifdoffset: int _subifds: int # number of subifds _subifdslevel: int # index of current subifd level _subifdsoffsets: list[int] # offsets to offsets to subifds _nextifdoffsets: list[int] # offsets to offset to next ifd _ifdindex: int # index of current ifd _storedshape: StoredShape | None # normalized shape in consecutive pages def __init__( self, file: str | os.PathLike[Any] | FileHandle | IO[bytes], /, *, mode: Literal['w', 'x', 'r+'] | None = None, bigtiff: bool = False, byteorder: ByteOrder | None = None, append: bool | str = False, imagej: bool = False, ome: bool | None = None, shaped: bool | None = None, ) -> None: if mode in {'r+', 'r+b'} or ( isinstance(file, FileHandle) and file._mode == 'r+b' ): mode = 'r+' append = True if append: # determine if file is an existing TIFF file that can be extended try: with FileHandle(file, mode='rb', size=0) as fh: pos = fh.tell() try: with TiffFile(fh) as tif: if append != 'force' and not tif.is_appendable: raise ValueError( 'cannot append to file containing metadata' ) byteorder = tif.byteorder bigtiff = tif.is_bigtiff self._ifdoffset = cast( int, tif.pages.next_page_offset ) finally: fh.seek(pos) append = True except (OSError, FileNotFoundError): append = False if append: if mode not in {None, 'r+', 'r+b'}: raise ValueError("append mode must be 'r+'") mode = 'r+' elif mode is None: mode = 'w' if byteorder is None or byteorder in {'=', '|'}: byteorder = '<' if sys.byteorder == 'little' else '>' elif byteorder not in {'<', '>'}: raise ValueError(f'invalid byteorder {byteorder}') if byteorder == '<': self.tiff = TIFF.BIG_LE if bigtiff else TIFF.CLASSIC_LE else: self.tiff = TIFF.BIG_BE if bigtiff else TIFF.CLASSIC_BE self._truncate = False self._metadata = None self._colormap = None self._tags = None self._datashape = None self._datadtype = None self._dataoffset = None self._databytecounts = None self._dataoffsetstag = None self._descriptiontag = None self._subifds = 0 self._subifdslevel = -1 self._subifdsoffsets = [] self._nextifdoffsets = [] self._ifdindex = 0 self._omexml = None self._storedshape = None self._fh = FileHandle(file, mode=mode, size=0) if append: self._fh.seek(0, os.SEEK_END) else: assert byteorder is not None self._fh.write(b'II' if byteorder == '<' else b'MM') if bigtiff: self._fh.write(struct.pack(byteorder + 'HHH', 43, 8, 0)) else: self._fh.write(struct.pack(byteorder + 'H', 42)) # 
first IFD self._ifdoffset = self._fh.tell() self._fh.write(struct.pack(self.tiff.offsetformat, 0)) self._ome = None if ome is None else bool(ome) self._imagej = False if self._ome else bool(imagej) if self._imagej: self._ome = False if self._ome or self._imagej: self._tifffile = False else: self._tifffile = True if shaped is None else bool(shaped) if imagej and bigtiff: warnings.warn( f'{self!r} writing nonconformant BigTIFF ImageJ', UserWarning ) def write( self, data: ( ArrayLike | Iterator[NDArray[Any] | None] | Iterator[bytes] | None ) = None, *, shape: Sequence[int] | None = None, dtype: DTypeLike | None = None, photometric: PHOTOMETRIC | int | str | None = None, planarconfig: PLANARCONFIG | int | str | None = None, extrasamples: Sequence[EXTRASAMPLE | int | str] | None = None, volumetric: bool = False, tile: Sequence[int] | None = None, rowsperstrip: int | None = None, bitspersample: int | None = None, compression: COMPRESSION | int | str | bool | None = None, compressionargs: dict[str, Any] | None = None, predictor: PREDICTOR | int | str | bool | None = None, subsampling: tuple[int, int] | None = None, jpegtables: bytes | None = None, iccprofile: bytes | None = None, colormap: ArrayLike | None = None, description: str | bytes | None = None, datetime: str | bool | DateTime | None = None, resolution: ( tuple[float | tuple[int, int], float | tuple[int, int]] | None ) = None, resolutionunit: RESUNIT | int | str | None = None, subfiletype: FILETYPE | int | None = None, software: str | bytes | bool | None = None, subifds: int | Sequence[int] | None = None, metadata: dict[str, Any] | None = {}, extratags: Sequence[TagTuple] | None = None, contiguous: bool = False, truncate: bool = False, align: int | None = None, maxworkers: int | None = None, buffersize: int | None = None, returnoffset: bool = False, ) -> tuple[int, int] | None: r"""Write multi-dimensional image to series of TIFF pages. Metadata in JSON, ImageJ, or OME-XML format are written to the ImageDescription tag of the first page of a series by default, such that the image can later be read back as an array of the same shape. The values of the ImageWidth, ImageLength, ImageDepth, and SamplesPerPixel tags are inferred from the last dimensions of the data's shape. The value of the SampleFormat tag is inferred from the data's dtype. Image data are written uncompressed in one strip per plane by default. Dimensions higher than 2 to 4 (depending on photometric mode, planar configuration, and volumetric mode) are flattened and written as separate pages. If the data size is zero, write a single page with shape (0, 0). Parameters: data: Specifies image to write. If *None*, an empty image is written, which size and type must be specified using `shape` and `dtype` arguments. This option cannot be used with compression, predictors, packed integers, or bilevel images. A copy of array-like data is made if it is not a C-contiguous numpy or dask array with the same byteorder as the TIFF file. Iterators must yield ndarrays or bytes compatible with the file's byteorder as well as the `shape` and `dtype` arguments. Iterator bytes must be compatible with the `compression`, `predictor`, `subsampling`, and `jpegtables` arguments. If `tile` is specified, iterator items must match the tile shape. Incomplete tiles are zero-padded. Iterators of non-tiled images must yield ndarrays of `shape[1:]` or strips as bytes. Iterators of strip ndarrays are not supported. 
Writing dask arrays might be excruciatingly slow for arrays with many chunks or files with many segments. (https://github.com/dask/dask/issues/8570). shape: Shape of image to write. The default is inferred from the `data` argument if possible. A ValueError is raised if the value is incompatible with the `data` or other arguments. dtype: NumPy data type of image to write. The default is inferred from the `data` argument if possible. A ValueError is raised if the value is incompatible with the `data` argument. photometric: Color space of image. The default is inferred from the data shape, dtype, and the `colormap` argument. A UserWarning is logged if RGB color space is auto-detected. Specify this parameter to silence the warning and to avoid ambiguities. *MINISBLACK*: for bilevel and grayscale images, 0 is black. *MINISWHITE*: for bilevel and grayscale images, 0 is white. *RGB*: the image contains red, green and blue samples. *SEPARATED*: the image contains CMYK samples. *PALETTE*: the image is used as an index into a colormap. *CFA*: the image is a Color Filter Array. The CFARepeatPatternDim, CFAPattern, and other DNG or TIFF/EP tags must be specified in `extratags` to produce a valid file. The value is written to the PhotometricInterpretation tag. planarconfig: Specifies if samples are stored interleaved or in separate planes. *CONTIG*: the last dimension contains samples. *SEPARATE*: the 3rd or 4th last dimension contains samples. The default is inferred from the data shape and `photometric` mode. If this parameter is set, extra samples are used to store grayscale images. The value is written to the PlanarConfiguration tag. extrasamples: Interpretation of extra components in pixels. *UNSPECIFIED*: no transparency information (default). *ASSOCALPHA*: true transparency with premultiplied color. *UNASSALPHA*: independent transparency masks. The values are written to the ExtraSamples tag. volumetric: Write volumetric image to single page (instead of multiple pages) using SGI ImageDepth tag. The volumetric format is not part of the TIFF specification, and few software can read it. OME and ImageJ formats are not compatible with volumetric storage. tile: Shape ([depth,] length, width) of image tiles to write. By default, image data are written in strips. The tile length and width must be a multiple of 16. If a tile depth is provided, the SGI ImageDepth and TileDepth tags are used to write volumetric data. Tiles cannot be used to write contiguous series, except if the tile shape matches the data shape. The values are written to the TileWidth, TileLength, and TileDepth tags. rowsperstrip: Number of rows per strip. By default, strips are about 256 KB if `compression` is enabled, else rowsperstrip is set to the image length. The value is written to the RowsPerStrip tag. bitspersample: Number of bits per sample. The default is the number of bits of the data's dtype. Different values per samples are not supported. Unsigned integer data are packed into bytes as tightly as possible. Valid values are 1-8 for uint8, 9-16 for uint16, and 17-32 for uint32. This setting cannot be used with compression, contiguous series, or empty files. The value is written to the BitsPerSample tag. compression: Compression scheme used on image data. By default, image data are written uncompressed. Compression cannot be used to write contiguous series. Compressors may require certain data shapes, types or value ranges. For example, JPEG compression requires grayscale or RGB(A), uint8 or 12-bit uint16. 
JPEG compression is experimental. JPEG markers and TIFF tags may not match. Only a limited set of compression schemes are implemented. 'ZLIB' is short for ADOBE_DEFLATE. The value is written to the Compression tag. compressionargs: Extra arguments passed to compression codec, for example, compression level. Refer to the Imagecodecs implementation for supported arguments. predictor: Horizontal differencing operator applied to image data before compression. By default, no operator is applied. Predictors can only be used with certain compression schemes and data types. The value is written to the Predictor tag. subsampling: Horizontal and vertical subsampling factors used for the chrominance components of images: (1, 1), (2, 1), (2, 2), or (4, 1). The default is *(2, 2)*. Currently applies to JPEG compression of RGB images only. Images are stored in YCbCr color space, the value of the PhotometricInterpretation tag is *YCBCR*. Segment widths must be a multiple of 8 times the horizontal factor. Segment lengths and rowsperstrip must be a multiple of 8 times the vertical factor. The values are written to the YCbCrSubSampling tag. jpegtables: JPEG quantization and/or Huffman tables. Use for copying pre-compressed JPEG segments. The value is written to the JPEGTables tag. iccprofile: International Color Consortium (ICC) device profile characterizing image color space. The value is written verbatim to the InterColorProfile tag. colormap: RGB color values for corresponding data value. The colormap array must be of shape `(3, 2\*\*(data.itemsize*8))` (or `(3, 256)` for ImageJ) and dtype uint16. The image's data type must be uint8 or uint16 (or float32 for ImageJ) and the values are indices into the last dimension of the colormap. The value is written to the ColorMap tag. description: Subject of image. Must be 7-bit ASCII. Cannot be used with the ImageJ or OME formats. The value is written to the ImageDescription tag of the first page of a series. datetime: Date and time of image creation in ``%Y:%m:%d %H:%M:%S`` format or datetime object. If *True*, the current date and time is used. The value is written to the DateTime tag of the first page of a series. resolution: Number of pixels per `resolutionunit` in X and Y directions as float or rational numbers. The default is (1.0, 1.0). The values are written to the YResolution and XResolution tags. resolutionunit: Unit of measurement for `resolution` values. The default is *NONE* if `resolution` is not specified and for ImageJ format, else *INCH*. The value is written to the ResolutionUnit tags. subfiletype: Bitfield to indicate kind of image. Set bit 0 if the image is a reduced-resolution version of another image. Set bit 1 if the image is part of a multi-page image. Set bit 2 if the image is transparency mask for another image (photometric must be MASK, SamplesPerPixel and bitspersample must be 1). software: Name of software used to create file. Must be 7-bit ASCII. The default is 'tifffile.py'. Unless *False*, the value is written to the Software tag of the first page of a series. subifds: Number of child IFDs. If greater than 0, the following `subifds` number of series are written as child IFDs of the current series. The number of IFDs written for each SubIFD level must match the number of IFDs written for the current series. All pages written to a certain SubIFD level of the current series must have the same hash. SubIFDs cannot be used with truncated or ImageJ files. SubIFDs in OME-TIFF files must be sub-resolutions of the main IFDs. 
metadata: Additional metadata describing image, written along with shape information in JSON, OME-XML, or ImageJ formats in ImageDescription or IJMetadata tags. If *None*, or the `shaped` argument to :py:class:`TiffWriter` is *False*, no information in JSON format is written to the ImageDescription tag. The 'axes' item defines the character codes for dimensions in `data` or `shape`. Refer to :py:class:`OmeXml` for supported keys when writing OME-TIFF. Refer to :py:func:`imagej_description` and :py:func:`imagej_metadata_tag` for items supported by the ImageJ format. Items 'Info', 'Labels', 'Ranges', 'LUTs', 'Plot', 'ROI', and 'Overlays' are written to the IJMetadata and IJMetadataByteCounts tags. Strings must be 7-bit ASCII. Written with the first page of a series only. extratags: Additional tags to write. A list of tuples with 5 items: 0. code (int): Tag Id. 1. dtype (:py:class:`DATATYPE`): Data type of items in `value`. 2. count (int): Number of data values. Not used for string or bytes values. 3. value (Sequence[Any]): `count` values compatible with `dtype`. Bytes must contain count values of dtype packed as binary data. 4. writeonce (bool): If *True*, write tag to first page of a series only. Duplicate and select tags in TIFF.TAG_FILTERED are not written if the extratag is specified by integer code. Extratags cannot be used to write IFD type tags. contiguous: If *False* (default), write data to a new series. If *True* and the data and arguments are compatible with previous written ones (same shape, no compression, etc.), the image data are stored contiguously after the previous one. In that case, `photometric`, `planarconfig`, and `rowsperstrip` are ignored. Metadata such as `description`, `metadata`, `datetime`, and `extratags` are written to the first page of a contiguous series only. Contiguous mode cannot be used with the OME or ImageJ formats. truncate: If *True*, only write first page of contiguous series if possible (uncompressed, contiguous, not tiled). Other TIFF readers will only be able to read part of the data. Cannot be used with the OME or ImageJ formats. align: Byte boundary on which to align image data in file. The default is 16. Use mmap.ALLOCATIONGRANULARITY for memory-mapped data. Following contiguous writes are not aligned. maxworkers: Maximum number of threads to concurrently compress tiles or strips. If *None* or *0*, use up to :py:attr:`_TIFF.MAXWORKERS` CPU cores for compressing large segments. Using multiple threads can significantly speed up this function if the bottleneck is encoding the data, for example, in case of large JPEG compressed tiles. If the bottleneck is I/O or pure Python code, using multiple threads might be detrimental. buffersize: Approximate number of bytes to compress in one pass. The default is :py:attr:`_TIFF.BUFFERSIZE` * 2. returnoffset: Return offset and number of bytes of memory-mappable image data in file. Returns: If `returnoffset` is *True* and the image data in the file are memory-mappable, return the offset and number of bytes of the image data in the file. """ # TODO: refactor this function fh: FileHandle storedshape: StoredShape = StoredShape(frames=-1) byteorder: Literal['>', '<'] inputshape: tuple[int, ...] datashape: tuple[int, ...] 
dataarray: NDArray[Any] | None = None dataiter: Iterator[NDArray[Any] | bytes | None] | None = None dataoffsetsoffset: tuple[int, int | None] | None = None databytecountsoffset: tuple[int, int | None] | None = None subifdsoffsets: tuple[int, int | None] | None = None datadtype: numpy.dtype[Any] bilevel: bool tiles: tuple[int, ...] ifdpos: int photometricsamples: int pos: int | None = None predictortag: int predictorfunc: Callable[..., Any] | None = None compressiontag: int compressionfunc: Callable[..., Any] | None = None tags: list[tuple[int, bytes, bytes | None, bool]] numtiles: int numstrips: int fh = self._fh byteorder = self.tiff.byteorder if data is None: # empty if shape is None or dtype is None: raise ValueError( "missing required 'shape' or 'dtype' arguments" ) dataarray = None dataiter = None datashape = tuple(shape) datadtype = numpy.dtype(dtype).newbyteorder(byteorder) elif hasattr(data, '__next__'): # iterator/generator if shape is None or dtype is None: raise ValueError( "missing required 'shape' or 'dtype' arguments" ) dataiter = data # type: ignore[assignment] datashape = tuple(shape) datadtype = numpy.dtype(dtype).newbyteorder(byteorder) elif hasattr(data, 'dtype'): # numpy, zarr, or dask array data = cast(numpy.ndarray, data) # type: ignore[type-arg] dataarray = data datadtype = numpy.dtype(data.dtype).newbyteorder(byteorder) if not hasattr(data, 'reshape'): # zarr array cannot be shape-normalized dataarray = numpy.asarray(data, datadtype, 'C') else: try: # numpy array must be C contiguous if data.flags.f_contiguous: dataarray = numpy.asarray(data, datadtype, 'C') except AttributeError: # not a numpy array pass datashape = dataarray.shape dataiter = None if dtype is not None and numpy.dtype(dtype) != datadtype: raise ValueError( f'dtype argument {dtype!r} does not match ' f'data dtype {datadtype}' ) if shape is not None and shape != dataarray.shape: raise ValueError( f'shape argument {shape!r} does not match ' f'data shape {dataarray.shape}' ) else: # scalar, list, tuple, etc # if dtype is not specified, default to float64 datadtype = numpy.dtype(dtype).newbyteorder(byteorder) dataarray = numpy.asarray(data, datadtype, 'C') datashape = dataarray.shape dataiter = None del data if any(size >= 4294967296 for size in datashape): raise ValueError('invalid data shape') bilevel = datadtype.char == '?' 
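# For bilevel (boolean) images, the size computation below packs eight
# pixels per byte along the last dimension (or the second-last dimension
# when the last dimension holds a single sample), rounding the byte count
# up when that dimension is not a multiple of 8.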
if bilevel: index = -1 if datashape[-1] > 1 else -2 datasize = product(datashape[:index]) if datashape[index] % 8: datasize *= datashape[index] // 8 + 1 else: datasize *= datashape[index] // 8 else: datasize = product(datashape) * datadtype.itemsize if datasize == 0: dataarray = None compression = False bitspersample = None if metadata is not None: truncate = True if ( not compression or ( not isinstance(compression, bool) # because True == 1 and compression in ('NONE', 'None', 'none', 1) ) or ( isinstance(compression, (tuple, list)) and compression[0] in (None, 0, 1, 'NONE', 'None', 'none') ) ): compression = False if not predictor or ( not isinstance(predictor, bool) # because True == 1 and predictor in {'NONE', 'None', 'none', 1} ): predictor = False inputshape = datashape packints = ( bitspersample is not None and bitspersample != datadtype.itemsize * 8 ) # just append contiguous data if possible if self._datashape is not None and self._datadtype is not None: if colormap is not None: colormap = numpy.asarray(colormap, dtype=byteorder + 'H') if ( not contiguous or self._datashape[1:] != datashape or self._datadtype != datadtype or (colormap is None and self._colormap is not None) or (self._colormap is None and colormap is not None) or not numpy.array_equal( colormap, self._colormap # type: ignore[arg-type] ) ): # incompatible shape, dtype, or colormap self._write_remaining_pages() if self._imagej: raise ValueError( 'the ImageJ format does not support ' 'non-contiguous series' ) if self._omexml is not None: if self._subifdslevel < 0: # add image to OME-XML assert self._storedshape is not None assert self._metadata is not None self._omexml.addimage( dtype=self._datadtype, shape=self._datashape[ 0 if self._datashape[0] != 1 else 1 : ], storedshape=self._storedshape.shape, **self._metadata, ) elif metadata is not None: self._write_image_description() # description might have been appended to file fh.seek(0, os.SEEK_END) if self._subifds: if self._truncate or truncate: raise ValueError( 'SubIFDs cannot be used with truncated series' ) self._subifdslevel += 1 if self._subifdslevel == self._subifds: # done with writing SubIFDs self._nextifdoffsets = [] self._subifdsoffsets = [] self._subifdslevel = -1 self._subifds = 0 self._ifdindex = 0 elif subifds: raise ValueError( 'SubIFDs in SubIFDs are not supported' ) self._datashape = None self._colormap = None elif compression or packints or tile: raise ValueError( 'contiguous mode cannot be used with compression or tiles' ) else: # consecutive mode # write all data, write IFDs/tags later self._datashape = (self._datashape[0] + 1,) + datashape offset = fh.tell() if dataarray is None: fh.write_empty(datasize) else: fh.write_array(dataarray, datadtype) if returnoffset: return offset, datasize return None if self._ome is None: if description is None: self._ome = '.ome.' 
in fh.extension else: self._ome = False if self._tifffile or self._imagej: self._truncate = bool(truncate) elif truncate: raise ValueError( 'truncate can only be used with imagej or shaped formats' ) else: self._truncate = False if self._truncate and (compression or packints or tile): raise ValueError( 'truncate cannot be used with compression, packints, or tiles' ) if datasize == 0: # write single placeholder TiffPage for arrays with size=0 datashape = (0, 0) warnings.warn( f'{self!r} writing zero-size array to nonconformant TIFF', UserWarning, ) # TODO: reconsider this # raise ValueError('cannot save zero size array') tagnoformat = self.tiff.tagnoformat offsetformat = self.tiff.offsetformat offsetsize = self.tiff.offsetsize tagsize = self.tiff.tagsize MINISBLACK = PHOTOMETRIC.MINISBLACK MINISWHITE = PHOTOMETRIC.MINISWHITE RGB = PHOTOMETRIC.RGB YCBCR = PHOTOMETRIC.YCBCR PALETTE = PHOTOMETRIC.PALETTE CONTIG = PLANARCONFIG.CONTIG SEPARATE = PLANARCONFIG.SEPARATE # parse input if photometric is not None: photometric = enumarg(PHOTOMETRIC, photometric) if planarconfig: planarconfig = enumarg(PLANARCONFIG, planarconfig) if extrasamples is not None: # TODO: deprecate non-sequence extrasamples extrasamples = tuple( int(enumarg(EXTRASAMPLE, x)) for x in sequence(extrasamples) ) if compressionargs is None: compressionargs = {} if compression: if isinstance(compression, (tuple, list)): # TODO: unreachable raise TypeError( "passing multiple values to the 'compression' " "parameter was deprecated in 2022.7.28. " "Use 'compressionargs' to pass extra arguments to the " "compression codec.", ) if isinstance(compression, str): compression = compression.upper() if compression == 'ZLIB': compression = 8 # ADOBE_DEFLATE elif isinstance(compression, bool): compression = 8 # ADOBE_DEFLATE compressiontag = enumarg(COMPRESSION, compression).value compression = True else: compressiontag = 1 compression = False if compressiontag == 1: compressionargs = {} elif compressiontag in {33003, 33004, 33005, 34712}: # JPEG2000: use J2K instead of JP2 compressionargs['codecformat'] = 0 # OPJ_CODEC_J2K assert compressionargs is not None if predictor: if not compression: raise ValueError('cannot use predictor without compression') if compressiontag in TIFF.IMAGE_COMPRESSIONS: # don't use predictor with JPEG, JPEG2000, WEBP, PNG, ... 
raise ValueError( 'cannot use predictor with ' f'{COMPRESSION(compressiontag)!r}' ) if isinstance(predictor, bool): if datadtype.kind == 'f': predictortag = 3 elif datadtype.kind in 'iu' and datadtype.itemsize <= 4: predictortag = 2 else: raise ValueError( f'cannot use predictor with {datadtype!r}' ) else: predictor = enumarg(PREDICTOR, predictor) if ( datadtype.kind in 'iu' and predictor.value not in {2, 34892, 34893} and datadtype.itemsize <= 4 ) or ( datadtype.kind == 'f' and predictor.value not in {3, 34894, 34895} ): raise ValueError( f'cannot use {predictor!r} with {datadtype!r}' ) predictortag = predictor.value else: predictortag = 1 del predictor predictorfunc = TIFF.PREDICTORS[predictortag] if self._ome: if description is not None: warnings.warn( f'{self!r} not writing description to OME-TIFF', UserWarning, ) description = None if self._omexml is None: if metadata is None: self._omexml = OmeXml() else: self._omexml = OmeXml(**metadata) if volumetric or (tile and len(tile) > 2): raise ValueError('OME-TIFF does not support ImageDepth') volumetric = False elif self._imagej: # if tile is not None or predictor or compression: # warnings.warn( # f'{self!r} the ImageJ format does not support ' # 'tiles, predictors, compression' # ) if description is not None: warnings.warn( f'{self!r} not writing description to ImageJ file', UserWarning, ) description = None if datadtype.char not in 'BHhf': raise ValueError( 'the ImageJ format does not support data type ' f'{datadtype.char!r}' ) if volumetric or (tile and len(tile) > 2): raise ValueError( 'the ImageJ format does not support ImageDepth' ) volumetric = False ijrgb = photometric == RGB if photometric else None if datadtype.char != 'B': if photometric == RGB: raise ValueError( 'the ImageJ format does not support ' f'data type {datadtype!r} for RGB' ) ijrgb = False if colormap is not None: ijrgb = False if metadata is None: axes = None else: axes = metadata.get('axes', None) ijshape = imagej_shape(datashape, rgb=ijrgb, axes=axes) if planarconfig == SEPARATE: raise ValueError( 'the ImageJ format does not support planar samples' ) if ijshape[-1] in {3, 4}: photometric = RGB elif photometric is None: if colormap is not None and datadtype.char == 'B': photometric = PALETTE else: photometric = MINISBLACK planarconfig = None planarconfig = CONTIG if ijrgb else None # verify colormap and indices if colormap is not None: colormap = numpy.asarray(colormap, dtype=byteorder + 'H') self._colormap = colormap if self._imagej: if colormap.shape != (3, 256): raise ValueError('invalid colormap shape for ImageJ') if datadtype.char == 'B' and photometric in { MINISBLACK, MINISWHITE, }: photometric = PALETTE elif not ( (datadtype.char == 'B' and photometric == PALETTE) or ( datadtype.char in 'Hf' and photometric in {MINISBLACK, MINISWHITE} ) ): warnings.warn( f'{self!r} not writing colormap to ImageJ image with ' f'dtype={datadtype} and {photometric=}', UserWarning, ) colormap = None elif photometric is None and datadtype.char in 'BH': photometric = PALETTE planarconfig = None if colormap.shape != (3, 2 ** (datadtype.itemsize * 8)): raise ValueError('invalid colormap shape') elif photometric == PALETTE: planarconfig = None if datadtype.char not in 'BH': raise ValueError('invalid data dtype for palette-image') if colormap.shape != (3, 2 ** (datadtype.itemsize * 8)): raise ValueError('invalid colormap shape') else: warnings.warn( f'{self!r} not writing colormap with image of ' f'dtype={datadtype} and {photometric=}', UserWarning, ) colormap = None if tile: # 
verify tile shape if ( not 1 < len(tile) < 4 or tile[-1] % 16 or tile[-2] % 16 or any(i < 1 for i in tile) ): raise ValueError(f'invalid tile shape {tile}') tile = tuple(int(i) for i in tile) if volumetric and len(tile) == 2: tile = (1,) + tile volumetric = len(tile) == 3 else: tile = () volumetric = bool(volumetric) assert isinstance(tile, tuple) # for mypy # normalize data shape to 5D or 6D, depending on volume: # (pages, separate_samples, [depth,] length, width, contig_samples) shape = reshape_nd( datashape, TIFF.PHOTOMETRIC_SAMPLES.get( photometric, 2 # type: ignore[arg-type] ), ) ndim = len(shape) if volumetric and ndim < 3: volumetric = False if photometric is None: deprecate = False photometric = MINISBLACK if bilevel: photometric = MINISWHITE elif planarconfig == CONTIG: if ndim > 2 and shape[-1] in {3, 4}: photometric = RGB deprecate = datadtype.char not in 'BH' elif planarconfig == SEPARATE: if volumetric and ndim > 3 and shape[-4] in {3, 4}: photometric = RGB deprecate = True elif ndim > 2 and shape[-3] in {3, 4}: photometric = RGB deprecate = True elif ndim > 2 and shape[-1] in {3, 4}: photometric = RGB planarconfig = CONTIG deprecate = datadtype.char not in 'BH' elif self._imagej or self._ome: photometric = MINISBLACK planarconfig = None elif volumetric and ndim > 3 and shape[-4] in {3, 4}: photometric = RGB planarconfig = SEPARATE deprecate = True elif ndim > 2 and shape[-3] in {3, 4}: photometric = RGB planarconfig = SEPARATE deprecate = True if deprecate: if planarconfig == CONTIG: msg = 'contiguous samples', 'parameter is' else: msg = ( 'separate component planes', "and 'planarconfig' parameters are", ) warnings.warn( f" data with shape {datashape} " f"and dtype '{datadtype}' are stored as RGB with {msg[0]}." " Future versions will store such data as MINISBLACK in " "separate pages by default, unless the 'photometric' " f"{msg[1]} specified.", DeprecationWarning, stacklevel=2, ) del msg del deprecate del datashape assert photometric is not None photometricsamples = TIFF.PHOTOMETRIC_SAMPLES[photometric] if planarconfig and len(shape) <= (3 if volumetric else 2): # TODO: raise error? planarconfig = None if photometricsamples > 1: photometric = MINISBLACK if photometricsamples > 1: if len(shape) < 3: raise ValueError(f'not a {photometric!r} image') if len(shape) < 4: volumetric = False if planarconfig is None: if photometric == RGB: samples_set = {photometricsamples, 4} # allow common alpha else: samples_set = {photometricsamples} if shape[-1] in samples_set: planarconfig = CONTIG elif shape[-4 if volumetric else -3] in samples_set: planarconfig = SEPARATE elif shape[-1] > shape[-4 if volumetric else -3]: # TODO: deprecated this? 
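# To avoid the DeprecationWarning emitted above for RGB-like shapes of
# non-uint8/uint16 dtype, callers can state their intent explicitly
# (a sketch; 'temp.tif' is a hypothetical name):
#
#     imwrite('temp.tif', data, photometric='rgb')  # keep as RGB
#     imwrite('temp.tif', data, photometric='minisblack')  # grayscale pages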
planarconfig = SEPARATE else: planarconfig = CONTIG if planarconfig == CONTIG: storedshape.contig_samples = shape[-1] storedshape.width = shape[-2] storedshape.length = shape[-3] if volumetric: storedshape.depth = shape[-4] else: storedshape.width = shape[-1] storedshape.length = shape[-2] if volumetric: storedshape.depth = shape[-3] storedshape.separate_samples = shape[-4] else: storedshape.separate_samples = shape[-3] if storedshape.samples > photometricsamples: storedshape.extrasamples = ( storedshape.samples - photometricsamples ) elif photometric == PHOTOMETRIC.CFA: if len(shape) != 2: raise ValueError('invalid CFA image') volumetric = False planarconfig = None storedshape.width = shape[-1] storedshape.length = shape[-2] # if all(et[0] != 50706 for et in extratags): # raise ValueError('must specify DNG tags for CFA image') elif planarconfig and len(shape) > (3 if volumetric else 2): if planarconfig == CONTIG: if extrasamples is None or len(extrasamples) > 0: # use extrasamples storedshape.contig_samples = shape[-1] storedshape.width = shape[-2] storedshape.length = shape[-3] if volumetric: storedshape.depth = shape[-4] else: planarconfig = None storedshape.contig_samples = 1 storedshape.width = shape[-1] storedshape.length = shape[-2] if volumetric: storedshape.depth = shape[-3] else: storedshape.width = shape[-1] storedshape.length = shape[-2] if extrasamples is None or len(extrasamples) > 0: # use extrasamples if volumetric: storedshape.depth = shape[-3] storedshape.separate_samples = shape[-4] else: storedshape.separate_samples = shape[-3] else: planarconfig = None storedshape.separate_samples = 1 if volumetric: storedshape.depth = shape[-3] storedshape.extrasamples = storedshape.samples - 1 else: # photometricsamples == 1 planarconfig = None if self._tifffile and (metadata or metadata == {}): # remove trailing 1s in shaped series while len(shape) > 2 and shape[-1] == 1: shape = shape[:-1] elif self._imagej and len(shape) > 2 and shape[-1] == 1: # TODO: remove this and sync with ImageJ shape shape = shape[:-1] if len(shape) < 3: volumetric = False if not extrasamples: storedshape.width = shape[-1] storedshape.length = shape[-2] if volumetric: storedshape.depth = shape[-3] else: storedshape.contig_samples = shape[-1] storedshape.width = shape[-2] storedshape.length = shape[-3] if volumetric: storedshape.depth = shape[-4] storedshape.extrasamples = storedshape.samples - 1 if not volumetric and tile and len(tile) == 3 and tile[0] > 1: raise ValueError( f' cannot write {storedshape!r} ' f'using volumetric tiles {tile}' ) if subfiletype is not None and subfiletype & 0b100: # FILETYPE_MASK if not ( bilevel and storedshape.samples == 1 and photometric in {0, 1, 4} ): raise ValueError('invalid SubfileType MASK') photometric = PHOTOMETRIC.MASK packints = False if bilevel: if bitspersample is not None and bitspersample != 1: raise ValueError(f'{bitspersample=} must be 1 for bilevel') bitspersample = 1 elif compressiontag in {6, 7, 34892, 33007}: # JPEG # TODO: add bitspersample to compressionargs? if bitspersample is None: if 'bitspersample' in compressionargs: bitspersample = compressionargs['bitspersample'] else: bitspersample = 12 if datadtype == 'uint16' else 8 if not 2 <= bitspersample <= 16: raise ValueError( f'{bitspersample=} invalid for JPEG compression' ) elif compressiontag in {33003, 33004, 33005, 34712, 50002, 52546}: # JPEG2K, JPEGXL # TODO: unify with JPEG? 
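# BitsPerSample defaults worked out in this block (a sketch): the
# JPEG-family codecs default to 8 bits (12 for uint16 input), while
# JPEG 2000 and JPEG XL default to the dtype's full itemsize; an
# explicit value can also be passed, for example 12-bit JPEG from
# uint16 data:
#
#     tif.write(data16, compression='jpeg', bitspersample=12,
#               photometric='minisblack')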
if bitspersample is None: if 'bitspersample' in compressionargs: bitspersample = compressionargs['bitspersample'] else: bitspersample = datadtype.itemsize * 8 if not ( bitspersample > {1: 0, 2: 8, 4: 16}[datadtype.itemsize] and bitspersample <= datadtype.itemsize * 8 ): raise ValueError( f'{bitspersample=} out of range of {datadtype=}' ) elif bitspersample is None: bitspersample = datadtype.itemsize * 8 elif ( datadtype.kind != 'u' or datadtype.itemsize > 4 ) and bitspersample != datadtype.itemsize * 8: raise ValueError(f'{bitspersample=} does not match {datadtype=}') elif not ( bitspersample > {1: 0, 2: 8, 4: 16}[datadtype.itemsize] and bitspersample <= datadtype.itemsize * 8 ): raise ValueError(f'{bitspersample=} out of range of {datadtype=}') elif compression: if bitspersample != datadtype.itemsize * 8: raise ValueError( f'{bitspersample=} cannot be used with compression' ) elif bitspersample != datadtype.itemsize * 8: packints = True if storedshape.frames == -1: s0 = storedshape.page_size storedshape.frames = 1 if s0 == 0 else product(inputshape) // s0 if datasize > 0 and not storedshape.is_valid: raise RuntimeError(f'invalid {storedshape!r}') if photometric == PALETTE: if storedshape.samples != 1 or storedshape.extrasamples > 0: raise ValueError(f'invalid {storedshape!r} for palette mode') elif storedshape.samples < photometricsamples: raise ValueError( f'not enough samples for {photometric!r}: ' f'expected {photometricsamples}, got {storedshape.samples}' ) if ( planarconfig is not None and storedshape.planarconfig != planarconfig ): raise ValueError( f'{planarconfig!r} does not match {storedshape!r}' ) del planarconfig if dataarray is not None: dataarray = dataarray.reshape(storedshape.shape) tags = [] # list of (code, ifdentry, ifdvalue, writeonce) if tile: tagbytecounts = 325 # TileByteCounts tagoffsets = 324 # TileOffsets else: tagbytecounts = 279 # StripByteCounts tagoffsets = 273 # StripOffsets self._dataoffsetstag = tagoffsets pack = self._pack addtag = self._addtag if extratags is None: extratags = () if description is not None: # ImageDescription: user provided description addtag(tags, 270, 2, 0, description, True) # write shape and metadata to ImageDescription self._metadata = {} if not metadata else metadata.copy() if self._omexml is not None: if len(self._omexml.images) == 0: # rewritten later at end of file description = '\x00\x00\x00\x00' else: description = None elif self._imagej: ijmetadata = parse_kwargs( self._metadata, 'Info', 'Labels', 'Ranges', 'LUTs', 'Plot', 'ROI', 'Overlays', 'Properties', 'info', 'labels', 'ranges', 'luts', 'plot', 'roi', 'overlays', 'prop', ) for t in imagej_metadata_tag(ijmetadata, byteorder): addtag(tags, *t) description = imagej_description( inputshape, rgb=storedshape.contig_samples in {3, 4}, colormaped=self._colormap is not None, **self._metadata, ) description += '\x00' * 64 # add buffer for in-place update elif self._tifffile and (metadata or metadata == {}): if self._truncate: self._metadata.update(truncated=True) description = shaped_description(inputshape, **self._metadata) description += '\x00' * 16 # add buffer for in-place update # elif metadata is None and self._truncate: # raise ValueError('cannot truncate without writing metadata') elif description is not None: if not isinstance(description, bytes): description = description.encode('ascii') self._descriptiontag = TiffTag( self, 0, 270, 2, len(description), description, 0 ) description = None if description is None: # disable shaped format if user disabled metadata 
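# ImageDescription behavior illustrated (a sketch): the shaped format
# stores JSON such as {"shape": [10, 256, 256], "axes": "QYX"} in tag
# 270 and rewrites it in place when the file is closed. Passing
# metadata=None suppresses that description entirely:
#
#     imwrite('temp.tif', data)  # writes shaped JSON description
#     imwrite('temp.tif', data, metadata=None)  # writes no description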
self._tifffile = False else: description = description.encode('ascii') addtag(tags, 270, 2, 0, description, True) self._descriptiontag = TiffTag( self, 0, 270, 2, len(description), description, 0 ) del description if software is None: software = 'tifffile.py' if software: addtag(tags, 305, 2, 0, software, True) if datetime: if isinstance(datetime, str): if len(datetime) != 19 or datetime[16] != ':': raise ValueError('invalid datetime string') elif isinstance(datetime, DateTime): datetime = datetime.strftime('%Y:%m:%d %H:%M:%S') else: datetime = DateTime.now().strftime('%Y:%m:%d %H:%M:%S') addtag(tags, 306, 2, 0, datetime, True) addtag(tags, 259, 3, 1, compressiontag) # Compression if compressiontag == 34887: # LERC if 'compression' not in compressionargs: lerc_compression = 0 elif compressionargs['compression'] is None: lerc_compression = 0 elif compressionargs['compression'] == 'deflate': lerc_compression = 1 elif compressionargs['compression'] == 'zstd': lerc_compression = 2 else: raise ValueError( 'invalid LERC compression ' f'{compressionargs["compression"]!r}' ) addtag(tags, 50674, 4, 2, (4, lerc_compression)) del lerc_compression if predictortag != 1: addtag(tags, 317, 3, 1, predictortag) addtag(tags, 256, 4, 1, storedshape.width) # ImageWidth addtag(tags, 257, 4, 1, storedshape.length) # ImageLength if tile: addtag(tags, 322, 4, 1, tile[-1]) # TileWidth addtag(tags, 323, 4, 1, tile[-2]) # TileLength if volumetric: addtag(tags, 32997, 4, 1, storedshape.depth) # ImageDepth if tile: addtag(tags, 32998, 4, 1, tile[0]) # TileDepth if subfiletype is not None: addtag(tags, 254, 4, 1, subfiletype) # NewSubfileType if (subifds or self._subifds) and self._subifdslevel < 0: if self._subifds: subifds = self._subifds elif hasattr(subifds, '__len__'): # allow TiffPage.subifds tuple subifds = len(subifds) # type: ignore[arg-type] else: subifds = int(subifds) # type: ignore[arg-type] self._subifds = subifds addtag( tags, 330, 18 if offsetsize > 4 else 13, subifds, [0] * subifds ) if not bilevel and not datadtype.kind == 'u': # SampleFormat sampleformat = {'u': 1, 'i': 2, 'f': 3, 'c': 6}[datadtype.kind] addtag( tags, 339, 3, storedshape.samples, (sampleformat,) * storedshape.samples, ) if colormap is not None: addtag(tags, 320, 3, colormap.size, colormap) if iccprofile is not None: addtag(tags, 34675, 7, len(iccprofile), iccprofile) addtag(tags, 277, 3, 1, storedshape.samples) if bilevel: # PlanarConfiguration if storedshape.samples > 1: addtag(tags, 284, 3, 1, storedshape.planarconfig) elif storedshape.samples > 1: # PlanarConfiguration addtag(tags, 284, 3, 1, storedshape.planarconfig) # BitsPerSample addtag( tags, 258, 3, storedshape.samples, (bitspersample,) * storedshape.samples, ) else: addtag(tags, 258, 3, 1, bitspersample) if storedshape.extrasamples > 0: if extrasamples is not None: if storedshape.extrasamples != len(extrasamples): raise ValueError( 'wrong number of extrasamples ' f'{storedshape.extrasamples} != {len(extrasamples)}' ) addtag(tags, 338, 3, len(extrasamples), extrasamples) elif photometric == RGB and storedshape.extrasamples == 1: # Unassociated alpha channel addtag(tags, 338, 3, 1, 2) else: # Unspecified alpha channel addtag( tags, 338, 3, storedshape.extrasamples, (0,) * storedshape.extrasamples, ) if jpegtables is not None: addtag(tags, 347, 7, len(jpegtables), jpegtables) if ( compressiontag == 7 and storedshape.planarconfig == 1 and photometric in {RGB, YCBCR} ): # JPEG compression with subsampling # TODO: use JPEGTables for multiple tiles or strips if subsampling is 
None: subsampling = (2, 2) elif subsampling not in {(1, 1), (2, 1), (2, 2), (4, 1)}: raise ValueError( f'invalid subsampling factors {subsampling!r}' ) maxsampling = max(subsampling) * 8 if tile and (tile[-1] % maxsampling or tile[-2] % maxsampling): raise ValueError(f'tile shape not a multiple of {maxsampling}') if storedshape.extrasamples > 1: raise ValueError('JPEG subsampling requires RGB(A) images') addtag(tags, 530, 3, 2, subsampling) # YCbCrSubSampling # use PhotometricInterpretation YCBCR by default outcolorspace = enumarg( PHOTOMETRIC, compressionargs.get('outcolorspace', 6) ) compressionargs['subsampling'] = subsampling compressionargs['colorspace'] = photometric.name compressionargs['outcolorspace'] = outcolorspace.name addtag(tags, 262, 3, 1, outcolorspace) if outcolorspace == YCBCR: # ReferenceBlackWhite is required for YCBCR if all(et[0] != 532 for et in extratags): addtag( tags, 532, 5, 6, (0, 1, 255, 1, 128, 1, 255, 1, 128, 1, 255, 1), ) else: if subsampling not in {None, (1, 1)}: logger().warning( f'{self!r} cannot apply subsampling {subsampling!r}' ) subsampling = None maxsampling = 1 addtag( tags, 262, 3, 1, photometric.value ) # PhotometricInterpretation if photometric == YCBCR: # YCbCrSubSampling and ReferenceBlackWhite addtag(tags, 530, 3, 2, (1, 1)) if all(et[0] != 532 for et in extratags): addtag( tags, 532, 5, 6, (0, 1, 255, 1, 128, 1, 255, 1, 128, 1, 255, 1), ) if resolutionunit is not None: resolutionunit = enumarg(RESUNIT, resolutionunit) elif self._imagej or resolution is None: resolutionunit = RESUNIT.NONE else: resolutionunit = RESUNIT.INCH if resolution is not None: addtag(tags, 282, 5, 1, rational(resolution[0])) # XResolution addtag(tags, 283, 5, 1, rational(resolution[1])) # YResolution if len(resolution) > 2: # TODO: unreachable raise ValueError( "passing a unit along with the 'resolution' parameter " "was deprecated in 2022.7.28. 
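# Resolution tags illustrated (a sketch): XResolution and YResolution
# are written as RATIONALs together with ResolutionUnit, for example
# 300 dots per inch:
#
#     tif.write(data, resolution=(300, 300), resolutionunit='INCH')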
" "Use the 'resolutionunit' parameter.", ) addtag(tags, 296, 3, 1, resolutionunit) # ResolutionUnit else: addtag(tags, 282, 5, 1, (1, 1)) # XResolution addtag(tags, 283, 5, 1, (1, 1)) # YResolution addtag(tags, 296, 3, 1, resolutionunit) # ResolutionUnit # can save data array contiguous contiguous = not (compression or packints or bilevel) if tile: # one chunk per tile per plane if len(tile) == 2: tiles = ( (storedshape.length + tile[0] - 1) // tile[0], (storedshape.width + tile[1] - 1) // tile[1], ) contiguous = ( contiguous and storedshape.length == tile[0] and storedshape.width == tile[1] ) else: tiles = ( (storedshape.depth + tile[0] - 1) // tile[0], (storedshape.length + tile[1] - 1) // tile[1], (storedshape.width + tile[2] - 1) // tile[2], ) contiguous = ( contiguous and storedshape.depth == tile[0] and storedshape.length == tile[1] and storedshape.width == tile[2] ) numtiles = product(tiles) * storedshape.separate_samples databytecounts = [ product(tile) * storedshape.contig_samples * datadtype.itemsize ] * numtiles bytecountformat = self._bytecount_format( databytecounts, compressiontag ) addtag( tags, tagbytecounts, bytecountformat, numtiles, databytecounts ) addtag(tags, tagoffsets, offsetformat, numtiles, [0] * numtiles) bytecountformat = f'{numtiles}{bytecountformat}' if not contiguous: if dataarray is not None: dataiter = iter_tiles(dataarray, tile, tiles) elif dataiter is None and not ( compression or packints or bilevel ): def dataiter_( numtiles: int = numtiles * storedshape.frames, bytecount: int = databytecounts[0], ) -> Iterator[bytes]: # yield empty tiles chunk = bytes(bytecount) for _ in range(numtiles): yield chunk dataiter = dataiter_() rowsperstrip = 0 elif contiguous and ( rowsperstrip is None or rowsperstrip >= storedshape.length ): count = storedshape.separate_samples * storedshape.depth databytecounts = [ storedshape.length * storedshape.width * storedshape.contig_samples * datadtype.itemsize ] * count bytecountformat = self._bytecount_format( databytecounts, compressiontag ) addtag(tags, tagbytecounts, bytecountformat, count, databytecounts) addtag(tags, tagoffsets, offsetformat, count, [0] * count) addtag(tags, 278, 4, 1, storedshape.length) # RowsPerStrip bytecountformat = f'{count}{bytecountformat}' rowsperstrip = storedshape.length numstrips = count else: # use rowsperstrip rowsize = ( storedshape.width * storedshape.contig_samples * datadtype.itemsize ) if compressiontag == 48124: # Jetraw works on whole camera frame rowsperstrip = storedshape.length if rowsperstrip is None: # compress ~256 KB chunks by default # TIFF-EP requires <= 64 KB if compression: rowsperstrip = 262144 // rowsize else: rowsperstrip = storedshape.length if rowsperstrip < 1: rowsperstrip = maxsampling elif rowsperstrip > storedshape.length: rowsperstrip = storedshape.length elif subsampling and rowsperstrip % maxsampling: rowsperstrip = ( math.ceil(rowsperstrip / maxsampling) * maxsampling ) assert rowsperstrip is not None addtag(tags, 278, 4, 1, rowsperstrip) # RowsPerStrip numstrips1 = ( storedshape.length + rowsperstrip - 1 ) // rowsperstrip numstrips = ( numstrips1 * storedshape.separate_samples * storedshape.depth ) # TODO: save bilevel data with rowsperstrip stripsize = rowsperstrip * rowsize databytecounts = [stripsize] * numstrips laststripsize = stripsize - rowsize * ( numstrips1 * rowsperstrip - storedshape.length ) for i in range(numstrips1 - 1, numstrips, numstrips1): databytecounts[i] = laststripsize bytecountformat = self._bytecount_format( databytecounts, compressiontag 
) addtag( tags, tagbytecounts, bytecountformat, numstrips, databytecounts ) addtag(tags, tagoffsets, offsetformat, numstrips, [0] * numstrips) bytecountformat = bytecountformat * numstrips if dataarray is not None and not contiguous: dataiter = iter_images(dataarray) if dataiter is None and not contiguous: raise ValueError('cannot write non-contiguous empty file') # add extra tags from user; filter duplicate and select tags extratag: TagTuple tagset = {t[0] for t in tags} tagset.update(TIFF.TAG_FILTERED) for extratag in extratags: if extratag[0] in tagset: logger().warning( f'{self!r} not writing extratag {extratag[0]}' ) else: addtag(tags, *extratag) del tagset del extratags # TODO: check TIFFReadDirectoryCheckOrder warning in files containing # multiple tags of same code # the entries in an IFD must be sorted in ascending order by tag code tags = sorted(tags, key=lambda x: x[0]) # define compress function compressionaxis: int = -2 bytesiter: bool = False iteritem: NDArray[Any] | bytes | None if dataiter is not None: iteritem, dataiter = peek_iterator(dataiter) bytesiter = isinstance(iteritem, bytes) if not bytesiter: iteritem = numpy.asarray(iteritem) if ( tile and storedshape.contig_samples == 1 and iteritem.shape[-1] != 1 ): # issue 185 compressionaxis = -1 if iteritem.dtype.char != datadtype.char: raise ValueError( f'dtype of iterator {iteritem.dtype!r} ' f'does not match dtype {datadtype!r}' ) else: iteritem = None if bilevel: if compressiontag == 1: def compressionfunc1( data: Any, axis: int = compressionaxis ) -> bytes: return numpy.packbits(data, axis=axis).tobytes() compressionfunc = compressionfunc1 elif compressiontag in {5, 32773, 8, 32946, 50013, 34925, 50000}: # LZW, PackBits, deflate, LZMA, ZSTD def compressionfunc2( data: Any, compressor: Any = TIFF.COMPRESSORS[compressiontag], axis: int = compressionaxis, kwargs: Any = compressionargs, ) -> bytes: data = numpy.packbits(data, axis=axis).tobytes() return compressor(data, **kwargs) compressionfunc = compressionfunc2 else: raise NotImplementedError('cannot compress bilevel image') elif compression: compressor = TIFF.COMPRESSORS[compressiontag] if compressiontag == 32773: # PackBits compressionargs['axis'] = compressionaxis # elif compressiontag == 48124: # # Jetraw # imagecodecs.jetraw_init( # parameters=compressionargs.pop('parameters', None), # verbose=compressionargs.pop('verbose', None), # ) # if not 'identifier' in compressionargs: # raise ValueError( # "jetraw_encode() missing argument: 'identifier'" # ) if subsampling: # JPEG with subsampling def compressionfunc( data: Any, compressor: Any = compressor, kwargs: Any = compressionargs, ) -> bytes: return compressor(data, **kwargs) elif predictorfunc is not None: def compressionfunc( data: Any, predictorfunc: Any = predictorfunc, compressor: Any = compressor, axis: int = compressionaxis, kwargs: Any = compressionargs, ) -> bytes: data = predictorfunc(data, axis=axis) return compressor(data, **kwargs) elif compressionargs: def compressionfunc( data: Any, compressor: Any = compressor, kwargs: Any = compressionargs, ) -> bytes: return compressor(data, **kwargs) elif compressiontag > 1: compressionfunc = compressor else: compressionfunc = None elif packints: def compressionfunc( data: Any, bps: Any = bitspersample, axis: int = compressionaxis, ) -> bytes: return imagecodecs.packints_encode(data, bps, axis=axis) else: compressionfunc = None del compression if not contiguous and not bytesiter and compressionfunc is not None: # create iterator of encoded tiles or strips 
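# The closures defined above compose one encoding pipeline per chunk,
# roughly (a sketch in terms of the imagecodecs package):
#
#     encoded = compressor(predictorfunc(chunk, axis=-2), **compressionargs)
#     # e.g. zlib with horizontal differencing:
#     # imagecodecs.zlib_encode(imagecodecs.delta_encode(chunk, axis=-2))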
bytesiter = True if tile: # dataiter yields tiles tileshape = tile + (storedshape.contig_samples,) tilesize = product(tileshape) * datadtype.itemsize maxworkers = TiffWriter._maxworkers( maxworkers, numtiles * storedshape.frames, tilesize, compressiontag, ) # yield encoded tiles dataiter = encode_chunks( numtiles * storedshape.frames, dataiter, # type: ignore[arg-type] compressionfunc, tileshape, datadtype, maxworkers, buffersize, True, ) else: # dataiter yields frames maxworkers = TiffWriter._maxworkers( maxworkers, numstrips * storedshape.frames, stripsize, compressiontag, ) # yield strips dataiter = iter_strips( dataiter, # type: ignore[arg-type] storedshape.page_shape, datadtype, rowsperstrip, ) # yield encoded strips dataiter = encode_chunks( numstrips * storedshape.frames, dataiter, compressionfunc, ( rowsperstrip, storedshape.width, storedshape.contig_samples, ), datadtype, maxworkers, buffersize, False, ) fhpos = fh.tell() # commented out to allow image data beyond 4GB in classic TIFF # if ( # not ( # offsetsize > 4 # or self._imagej or compressionfunc is not None # ) # and fhpos + datasize > 2**32 - 1 # ): # raise ValueError('data too large for classic TIFF format') dataoffset: int = 0 # if not compressed or multi-tiled, write the first IFD and then # all data contiguously; else, write all IFDs and data interleaved for pageindex in range(1 if contiguous else storedshape.frames): ifdpos = fhpos if ifdpos % 2: # position of IFD must begin on a word boundary fh.write(b'\x00') ifdpos += 1 if self._subifdslevel < 0: # update pointer at ifdoffset fh.seek(self._ifdoffset) fh.write(pack(offsetformat, ifdpos)) fh.seek(ifdpos) # create IFD in memory if pageindex < 2: subifdsoffsets = None ifd = io.BytesIO() ifd.write(pack(tagnoformat, len(tags))) tagoffset = ifd.tell() ifd.write(b''.join(t[1] for t in tags)) ifdoffset = ifd.tell() ifd.write(pack(offsetformat, 0)) # offset to next IFD # write tag values and patch offsets in ifdentries for tagindex, tag in enumerate(tags): offset = tagoffset + tagindex * tagsize + 4 + offsetsize code = tag[0] value = tag[2] if value: pos = ifd.tell() if pos % 2: # tag value is expected to begin on word boundary ifd.write(b'\x00') pos += 1 ifd.seek(offset) ifd.write(pack(offsetformat, ifdpos + pos)) ifd.seek(pos) ifd.write(value) if code == tagoffsets: dataoffsetsoffset = offset, pos elif code == tagbytecounts: databytecountsoffset = offset, pos elif code == 270: if ( self._descriptiontag is not None and self._descriptiontag.offset == 0 and value.startswith( self._descriptiontag.value ) ): self._descriptiontag.offset = ( ifdpos + tagoffset + tagindex * tagsize ) self._descriptiontag.valueoffset = ifdpos + pos elif code == 330: subifdsoffsets = offset, pos elif code == tagoffsets: dataoffsetsoffset = offset, None elif code == tagbytecounts: databytecountsoffset = offset, None elif code == 270: if ( self._descriptiontag is not None and self._descriptiontag.offset == 0 and self._descriptiontag.value in tag[1][-4:] ): self._descriptiontag.offset = ( ifdpos + tagoffset + tagindex * tagsize ) self._descriptiontag.valueoffset = ( self._descriptiontag.offset + offsetsize + 4 ) elif code == 330: subifdsoffsets = offset, None ifdsize = ifd.tell() if ifdsize % 2: ifd.write(b'\x00') ifdsize += 1 # write IFD later when strip/tile bytecounts and offsets are known fh.seek(ifdsize, os.SEEK_CUR) # write image data dataoffset = fh.tell() if align is None: align = 16 skip = (align - (dataoffset % align)) % align fh.seek(skip, os.SEEK_CUR) dataoffset += skip if contiguous: # 
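# The threaded chunk encoding set up above is tunable from the public
# API (a sketch): pass maxworkers and buffersize through imwrite or
# TiffWriter.write, for example:
#
#     imwrite('temp.tif', data, compression='zlib', maxworkers=4,
#             buffersize=2**24)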
write all image data contiguously if dataiter is not None: byteswritten = 0 if bytesiter: for iteritem in dataiter: # assert isinstance(iteritem, bytes) byteswritten += fh.write( iteritem # type: ignore[arg-type] ) del iteritem else: pagesize = storedshape.page_size * datadtype.itemsize for iteritem in dataiter: if iteritem is None: byteswritten += fh.write_empty(pagesize) else: # assert isinstance(iteritem, numpy.ndarray) byteswritten += fh.write_array( iteritem, # type: ignore[arg-type] datadtype, ) del iteritem if byteswritten != datasize: raise ValueError( 'iterator contains wrong number of bytes ' f'{byteswritten} != {datasize}' ) elif dataarray is None: fh.write_empty(datasize) else: fh.write_array(dataarray, datadtype) elif bytesiter: # write tiles or strips assert dataiter is not None for chunkindex in range(numtiles if tile else numstrips): iteritem = cast(bytes, next(dataiter)) # assert isinstance(iteritem, bytes) databytecounts[chunkindex] = len(iteritem) fh.write(iteritem) del iteritem elif tile: # write uncompressed tiles assert dataiter is not None tileshape = tile + (storedshape.contig_samples,) tilesize = product(tileshape) * datadtype.itemsize for tileindex in range(numtiles): iteritem = next(dataiter) if iteritem is None: databytecounts[tileindex] = 0 # fh.write_empty(tilesize) continue # assert not isinstance(iteritem, bytes) iteritem = numpy.ascontiguousarray(iteritem, datadtype) if iteritem.nbytes != tilesize: # if iteritem.dtype != datadtype: # raise ValueError( # 'dtype of tile does not match data' # ) if iteritem.nbytes > tilesize: raise ValueError('tile is too large') pad = tuple( (0, i - j) for i, j in zip(tileshape, iteritem.shape) ) iteritem = numpy.pad(iteritem, pad) fh.write_array(iteritem) del iteritem else: raise RuntimeError('unreachable code') # update strip/tile offsets assert dataoffsetsoffset is not None offset, pos = dataoffsetsoffset ifd.seek(offset) if pos is not None: ifd.write(pack(offsetformat, ifdpos + pos)) ifd.seek(pos) offset = dataoffset for size in databytecounts: ifd.write(pack(offsetformat, offset if size > 0 else 0)) offset += size else: ifd.write(pack(offsetformat, dataoffset)) if compressionfunc is not None or (tile and dataarray is None): # update strip/tile bytecounts assert databytecountsoffset is not None offset, pos = databytecountsoffset ifd.seek(offset) if pos is not None: ifd.write(pack(offsetformat, ifdpos + pos)) ifd.seek(pos) ifd.write(pack(bytecountformat, *databytecounts)) if subifdsoffsets is not None: # update and save pointer to SubIFDs tag values if necessary offset, pos = subifdsoffsets if pos is not None: ifd.seek(offset) ifd.write(pack(offsetformat, ifdpos + pos)) self._subifdsoffsets.append(ifdpos + pos) else: self._subifdsoffsets.append(ifdpos + offset) fhpos = fh.tell() fh.seek(ifdpos) fh.write(ifd.getbuffer()) fh.flush() if self._subifdslevel < 0: self._ifdoffset = ifdpos + ifdoffset else: # update SubIFDs tag values fh.seek( self._subifdsoffsets[self._ifdindex] + self._subifdslevel * offsetsize ) fh.write(pack(offsetformat, ifdpos)) # update SubIFD chain offsets if self._subifdslevel == 0: self._nextifdoffsets.append(ifdpos + ifdoffset) else: fh.seek(self._nextifdoffsets[self._ifdindex]) fh.write(pack(offsetformat, ifdpos)) self._nextifdoffsets[self._ifdindex] = ifdpos + ifdoffset self._ifdindex += 1 self._ifdindex %= len(self._subifdsoffsets) fh.seek(fhpos) # remove tags that should be written only once if pageindex == 0: tags = [tag for tag in tags if not tag[-1]] assert dataoffset > 0 self._datashape = (1,) + 
inputshape self._datadtype = datadtype self._dataoffset = dataoffset self._databytecounts = databytecounts self._storedshape = storedshape if contiguous: # write remaining IFDs/tags later self._tags = tags # return offset and size of image data if returnoffset: return dataoffset, sum(databytecounts) return None def overwrite_description(self, description: str, /) -> None: """Overwrite value of last ImageDescription tag. Can be used to write OME-XML after writing images. Ends a contiguous series. """ if self._descriptiontag is None: raise ValueError('no ImageDescription tag found') self._write_remaining_pages() self._descriptiontag.overwrite(description, erase=False) self._descriptiontag = None def close(self) -> None: """Write remaining pages and close file handle.""" try: if not self._truncate: self._write_remaining_pages() self._write_image_description() finally: try: self._fh.close() except Exception: pass @property def filehandle(self) -> FileHandle: """File handle to write file.""" return self._fh def _write_remaining_pages(self) -> None: """Write outstanding IFDs and tags to file.""" if not self._tags or self._truncate or self._datashape is None: return assert self._storedshape is not None assert self._databytecounts is not None assert self._dataoffset is not None pageno: int = self._storedshape.frames * self._datashape[0] - 1 if pageno < 1: self._tags = None self._dataoffset = None self._databytecounts = None return fh = self._fh fhpos: int = fh.tell() if fhpos % 2: fh.write(b'\x00') fhpos += 1 pack = struct.pack offsetformat: str = self.tiff.offsetformat offsetsize: int = self.tiff.offsetsize tagnoformat: str = self.tiff.tagnoformat tagsize: int = self.tiff.tagsize dataoffset: int = self._dataoffset pagedatasize: int = sum(self._databytecounts) subifdsoffsets: tuple[int, int | None] | None = None dataoffsetsoffset: tuple[int, int | None] pos: int | None offset: int # construct template IFD in memory # must patch offsets to next IFD and data before writing to file ifd = io.BytesIO() ifd.write(pack(tagnoformat, len(self._tags))) tagoffset = ifd.tell() ifd.write(b''.join(t[1] for t in self._tags)) ifdoffset = ifd.tell() ifd.write(pack(offsetformat, 0)) # offset to next IFD # tag values for tagindex, tag in enumerate(self._tags): offset = tagoffset + tagindex * tagsize + offsetsize + 4 code = tag[0] value = tag[2] if value: pos = ifd.tell() if pos % 2: # tag value is expected to begin on word boundary ifd.write(b'\x00') pos += 1 ifd.seek(offset) try: ifd.write(pack(offsetformat, fhpos + pos)) except Exception as exc: # struct.error if self._imagej: warnings.warn( f'{self!r} truncating ImageJ file', UserWarning ) self._truncate = True return raise ValueError( 'data too large for non-BigTIFF file' ) from exc ifd.seek(pos) ifd.write(value) if code == self._dataoffsetstag: # save strip/tile offsets for later updates dataoffsetsoffset = offset, pos elif code == 330: # save subifds offsets for later updates subifdsoffsets = offset, pos elif code == self._dataoffsetstag: dataoffsetsoffset = offset, None elif code == 330: subifdsoffsets = offset, None ifdsize = ifd.tell() if ifdsize % 2: ifd.write(b'\x00') ifdsize += 1 # check if all IFDs fit in file if offsetsize < 8 and fhpos + ifdsize * pageno > 2**32 - 32: if self._imagej: warnings.warn(f'{self!r} truncating ImageJ file', UserWarning) self._truncate = True return raise ValueError('data too large for non-BigTIFF file') # assemble IFD chain in memory from IFD template ifds = io.BytesIO(bytes(ifdsize * pageno)) ifdpos = fhpos for _ in 
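# The loop below stamps out one IFD per remaining page from the single
# in-memory template, patching only the strip/tile offsets and the
# next-IFD pointer before appending it to the chain; in sketch form:
#
#     for each remaining page:
#         patch(data offsets); patch(next-IFD pointer)
#         ifds.write(ifd template)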
range(pageno): # update strip/tile offsets in IFD dataoffset += pagedatasize # offset to image data offset, pos = dataoffsetsoffset ifd.seek(offset) if pos is not None: ifd.write(pack(offsetformat, ifdpos + pos)) ifd.seek(pos) offset = dataoffset for size in self._databytecounts: ifd.write(pack(offsetformat, offset)) offset += size else: ifd.write(pack(offsetformat, dataoffset)) if subifdsoffsets is not None: offset, pos = subifdsoffsets self._subifdsoffsets.append( ifdpos + (pos if pos is not None else offset) ) if self._subifdslevel < 0: if subifdsoffsets is not None: # update pointer to SubIFDs tag values if necessary offset, pos = subifdsoffsets if pos is not None: ifd.seek(offset) ifd.write(pack(offsetformat, ifdpos + pos)) # update pointer at ifdoffset to point to next IFD in file ifdpos += ifdsize ifd.seek(ifdoffset) ifd.write(pack(offsetformat, ifdpos)) else: # update SubIFDs tag values in file fh.seek( self._subifdsoffsets[self._ifdindex] + self._subifdslevel * offsetsize ) fh.write(pack(offsetformat, ifdpos)) # update SubIFD chain if self._subifdslevel == 0: self._nextifdoffsets.append(ifdpos + ifdoffset) else: fh.seek(self._nextifdoffsets[self._ifdindex]) fh.write(pack(offsetformat, ifdpos)) self._nextifdoffsets[self._ifdindex] = ifdpos + ifdoffset self._ifdindex += 1 self._ifdindex %= len(self._subifdsoffsets) ifdpos += ifdsize # write IFD entry ifds.write(ifd.getbuffer()) # terminate IFD chain ifdoffset += ifdsize * (pageno - 1) ifds.seek(ifdoffset) ifds.write(pack(offsetformat, 0)) # write IFD chain to file fh.seek(fhpos) fh.write(ifds.getbuffer()) if self._subifdslevel < 0: # update file to point to new IFD chain pos = fh.tell() fh.seek(self._ifdoffset) fh.write(pack(offsetformat, fhpos)) fh.flush() fh.seek(pos) self._ifdoffset = fhpos + ifdoffset self._tags = None self._dataoffset = None self._databytecounts = None # do not reset _storedshape, _datashape, _datadtype def _write_image_description(self) -> None: """Write metadata to ImageDescription tag.""" if self._datashape is None or self._descriptiontag is None: self._descriptiontag = None return assert self._storedshape is not None assert self._datadtype is not None if self._omexml is not None: if self._subifdslevel < 0: assert self._metadata is not None self._omexml.addimage( dtype=self._datadtype, shape=self._datashape[ 0 if self._datashape[0] != 1 else 1 : ], storedshape=self._storedshape.shape, **self._metadata, ) description = self._omexml.tostring(declaration=True) elif self._datashape[0] == 1: # description already up-to-date self._descriptiontag = None return # elif self._subifdslevel >= 0: # # don't write metadata to SubIFDs # return elif self._imagej: assert self._metadata is not None colormapped = self._colormap is not None isrgb = self._storedshape.samples in {3, 4} description = imagej_description( self._datashape, rgb=isrgb, colormaped=colormapped, **self._metadata, ) elif not self._tifffile: self._descriptiontag = None return else: assert self._metadata is not None description = shaped_description(self._datashape, **self._metadata) self._descriptiontag.overwrite(description.encode(), erase=False) self._descriptiontag = None def _addtag( self, tags: list[tuple[int, bytes, bytes | None, bool]], code: int | str, dtype: int | str, count: int | None, value: Any, writeonce: bool = False, /, ) -> None: """Append (code, ifdentry, ifdvalue, writeonce) to tags list. Compute ifdentry and ifdvalue bytes from code, dtype, count, value. 
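For example (a sketch), ``self._addtag(tags, 305, 2, 0, 'tifffile.py',
True)`` appends an ASCII Software tag entry; for string values, the
count argument is recomputed from the NUL-terminated encoded bytes.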
""" pack = self._pack if not isinstance(code, int): code = TIFF.TAGS[code] try: datatype = cast(int, dtype) dataformat = TIFF.DATA_FORMATS[datatype][-1] except KeyError as exc: try: dataformat = cast(str, dtype) if dataformat[0] in '<>': dataformat = dataformat[1:] datatype = TIFF.DATA_DTYPES[dataformat] except (KeyError, TypeError): raise ValueError(f'unknown dtype {dtype}') from exc del dtype rawcount = count if datatype == 2: # string if isinstance(value, str): # enforce 7-bit ASCII on Unicode strings try: value = value.encode('ascii') except UnicodeEncodeError as exc: raise ValueError( 'TIFF strings must be 7-bit ASCII' ) from exc elif not isinstance(value, bytes): raise ValueError('TIFF strings must be 7-bit ASCII') if len(value) == 0 or value[-1:] != b'\x00': value += b'\x00' count = len(value) if code == 270: rawcount = int(value.find(b'\x00\x00')) if rawcount < 0: rawcount = count else: # length of string without buffer rawcount = max(self.tiff.offsetsize + 1, rawcount + 1) rawcount = min(count, rawcount) else: rawcount = count value = (value,) elif isinstance(value, bytes): # packed binary data itemsize = struct.calcsize(dataformat) if len(value) % itemsize: raise ValueError('invalid packed binary data') count = len(value) // itemsize rawcount = count elif count is None: raise ValueError('invalid count') else: count = int(count) if datatype in {5, 10}: # rational count *= 2 dataformat = dataformat[-1] ifdentry = [ pack('HH', code, datatype), pack(self.tiff.offsetformat, rawcount), ] ifdvalue = None if struct.calcsize(dataformat) * count <= self.tiff.offsetsize: # value(s) can be written directly valueformat = f'{self.tiff.offsetsize}s' if isinstance(value, bytes): ifdentry.append(pack(valueformat, value)) elif count == 1: if isinstance(value, (tuple, list, numpy.ndarray)): value = value[0] ifdentry.append(pack(valueformat, pack(dataformat, value))) else: ifdentry.append( pack(valueformat, pack(f'{count}{dataformat}', *value)) ) else: # use offset to value(s) ifdentry.append(pack(self.tiff.offsetformat, 0)) if isinstance(value, bytes): ifdvalue = value elif isinstance(value, numpy.ndarray): if value.size != count: raise RuntimeError('value.size != count') if value.dtype.char != dataformat: raise RuntimeError('value.dtype.char != dtype') ifdvalue = value.tobytes() elif isinstance(value, (tuple, list)): ifdvalue = pack(f'{count}{dataformat}', *value) else: ifdvalue = pack(dataformat, value) tags.append((code, b''.join(ifdentry), ifdvalue, writeonce)) def _pack(self, fmt: str, *val: Any) -> bytes: """Return values packed to bytes according to format.""" if fmt[0] not in '<>': fmt = self.tiff.byteorder + fmt return struct.pack(fmt, *val) def _bytecount_format( self, bytecounts: Sequence[int], compression: int, / ) -> str: """Return small bytecount format.""" if len(bytecounts) == 1: return self.tiff.offsetformat[1] bytecount = bytecounts[0] if compression > 1: bytecount = bytecount * 10 if bytecount < 2**16: return 'H' if bytecount < 2**32: return 'I' return self.tiff.offsetformat[1] @staticmethod def _maxworkers( maxworkers: int | None, numchunks: int, chunksize: int, compression: int, ) -> int: """Return number of threads to encode segments.""" if maxworkers is not None: return maxworkers if ( # imagecodecs is None or compression <= 1 or numchunks < 2 or chunksize < 1024 or compression == 48124 # Jetraw is not thread-safe? 
): return 1 # the following is based on benchmarking RGB tile sizes vs maxworkers # using a (8228, 11500, 3) uint8 WSI slide: if chunksize < 131072 and compression in { 7, # JPEG 33007, # ALT_JPG 34892, # JPEG_LOSSY 32773, # PackBits 34887, # LERC }: return 1 if chunksize < 32768 and compression in { 5, # LZW 8, # zlib 32946, # zlib 50000, # zstd 50013, # zlib/pixtiff }: # zlib, return 1 if chunksize < 8192 and compression in { 34934, # JPEG XR 22610, # JPEG XR 34933, # PNG }: return 1 if chunksize < 2048 and compression in { 33003, # JPEG2000 33004, # JPEG2000 33005, # JPEG2000 34712, # JPEG2000 50002, # JPEG XL 52546, # JPEG XL DNG }: return 1 if chunksize < 1024 and compression in { 34925, # LZMA 50001, # WebP }: return 1 if compression == 34887: # LERC # limit to 4 threads return min(numchunks, 4) return min(numchunks, TIFF.MAXWORKERS) def __enter__(self) -> TiffWriter: return self def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: self.close() def __repr__(self) -> str: return f'' @final class TiffFile: """Read image and metadata from TIFF file. TiffFile instances must be closed with :py:meth:`TiffFile.close`, which is automatically called when using the 'with' context manager. TiffFile instances are not thread-safe. All attributes are read-only. Parameters: file: Specifies TIFF file to read. Open file objects must be positioned at the TIFF header. mode: File open mode if `file` is file name. The default is 'rb'. name: Name of file if `file` is file handle. offset: Start position of embedded file. The default is the current file position. size: Size of embedded file. The default is the number of bytes from the `offset` to the end of the file. omexml: OME metadata in XML format, for example, from external companion file or sanitized XML overriding XML in file. _multifile, _useframes, _parent: Internal use. **is_flags: Override `TiffFile.is_` flags, for example: ``is_ome=False``: disable processing of OME-XML metadata. ``is_lsm=False``: disable special handling of LSM files. ``is_ndpi=True``: force file to be NDPI format. Raises: TiffFileError: Invalid TIFF structure. 
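A minimal usage sketch (assuming a file named 'temp.tif' exists)::

    with TiffFile('temp.tif') as tif:
        image = tif.asarray()  # first series as NumPy array
        axes = tif.series[0].axes
        store = tif.aszarr()  # wrap first series as Zarr 2 store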
""" tiff: TiffFormat """Properties of TIFF file format.""" pages: TiffPages """Sequence of pages in TIFF file.""" _fh: FileHandle _multifile: bool _parent: TiffFile # OME master file _files: dict[str | None, TiffFile] # cache of TiffFile instances _omexml: str | None # external OME-XML _decoders: dict[ # cache of TiffPage.decode functions int, Callable[ ..., tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ], ], ] def __init__( self, file: str | os.PathLike[Any] | FileHandle | IO[bytes], /, *, mode: Literal['r', 'r+'] | None = None, name: str | None = None, offset: int | None = None, size: int | None = None, omexml: str | None = None, _multifile: bool | None = None, _useframes: bool | None = None, _parent: TiffFile | None = None, **is_flags: bool | None, ) -> None: for key, value in is_flags.items(): if key[:3] == 'is_' and key[3:] in TIFF.FILE_FLAGS: if value is not None: setattr(self, key, bool(value)) else: raise TypeError(f'unexpected keyword argument: {key}') if mode not in {None, 'r', 'r+', 'rb', 'r+b'}: raise ValueError(f'invalid mode {mode!r}') self._omexml = None if omexml: if omexml.strip()[-4:] != 'OME>': raise ValueError('invalid OME-XML') self._omexml = omexml self.is_ome = True fh = FileHandle(file, mode=mode, name=name, offset=offset, size=size) self._fh = fh self._multifile = True if _multifile is None else bool(_multifile) self._files = {fh.name: self} self._decoders = {} self._parent = self if _parent is None else _parent try: fh.seek(0) header = fh.read(4) try: byteorder = {b'II': '<', b'MM': '>', b'EP': '<'}[header[:2]] except KeyError as exc: raise TiffFileError(f'not a TIFF file {header!r}') from exc version = struct.unpack(byteorder + 'H', header[2:4])[0] if version == 43: # BigTiff offsetsize, zero = struct.unpack(byteorder + 'HH', fh.read(4)) if zero != 0 or offsetsize != 8: raise TiffFileError( f'invalid BigTIFF offset size {(offsetsize, zero)}' ) if byteorder == '>': self.tiff = TIFF.BIG_BE else: self.tiff = TIFF.BIG_LE elif version == 42: # Classic TIFF if byteorder == '>': self.tiff = TIFF.CLASSIC_BE elif is_flags.get('is_ndpi', fh.extension == '.ndpi'): # NDPI uses 64 bit IFD offsets if is_flags.get('is_ndpi', True): self.tiff = TIFF.NDPI_LE else: self.tiff = TIFF.CLASSIC_LE else: self.tiff = TIFF.CLASSIC_LE elif version == 0x4E31: # NIFF if byteorder == '>': raise TiffFileError('invalid NIFF file') logger().error(f'{self!r} NIFF format not supported') self.tiff = TIFF.CLASSIC_LE elif version in {0x55, 0x4F52, 0x5352}: # Panasonic or Olympus RAW logger().error( f'{self!r} RAW format 0x{version:04X} not supported' ) if byteorder == '>': self.tiff = TIFF.CLASSIC_BE else: self.tiff = TIFF.CLASSIC_LE else: raise TiffFileError(f'invalid TIFF version {version}') # file handle is at offset to offset to first page self.pages = TiffPages(self) if self.is_lsm and ( self.filehandle.size >= 2**32 or self.pages[0].compression != 1 or self.pages[1].compression != 1 ): self._lsm_load_pages() elif self.is_scanimage and not self.is_bigtiff: # ScanImage <= 2015 try: self.pages._load_virtual_frames() except Exception as exc: logger().error( f'{self!r} ' f'raised {exc!r:.128}' ) elif self.is_ndpi: try: self._ndpi_load_pages() except Exception as exc: logger().error( f'{self!r} <_ndpi_load_pages> raised {exc!r:.128}' ) elif _useframes: self.pages.useframes = True except Exception: fh.close() raise @property def byteorder(self) -> Literal['>', '<']: """Byteorder of TIFF file.""" return self.tiff.byteorder @property def filehandle(self) -> 
FileHandle: """File handle.""" return self._fh @property def filename(self) -> str: """Name of file handle.""" return self._fh.name @cached_property def fstat(self) -> Any: """Status of file handle's descriptor, if any.""" try: return os.fstat(self._fh.fileno()) except Exception: # io.UnsupportedOperation return None def close(self) -> None: """Close open file handle(s).""" for tif in self._files.values(): tif.filehandle.close() def asarray( self, key: int | slice | Iterable[int] | None = None, *, series: int | TiffPageSeries | None = None, level: int | None = None, squeeze: bool | None = None, out: OutputType = None, maxworkers: int | None = None, buffersize: int | None = None, ) -> NDArray[Any]: """Return images from select pages as NumPy array. By default, the image array from the first level of the first series is returned. Parameters: key: Specifies which pages to return as array. By default, the image of the specified `series` and `level` is returned. If not *None*, the images from the specified pages in the whole file (if `series` is *None*) or a specified series are returned as a stacked array. Requesting an array from multiple pages that are not compatible wrt. shape, dtype, compression etc. is undefined, that is, it may crash or return incorrect values. series: Specifies which series of pages to return as array. The default is 0. level: Specifies which level of multi-resolution series to return as array. The default is 0. squeeze: If *True*, remove all length-1 dimensions (except X and Y) from array. If *False*, single pages are returned as 5D array of shape :py:attr:`TiffPage.shaped`. For series, the shape of the returned array also includes singlet dimensions specified in some file formats. For example, ImageJ series and most commonly also OME series, are returned in TZCYXS order. By default, all but `"shaped"` series are squeezed. out: Specifies how image array is returned. By default, a new NumPy array is created. If a *numpy.ndarray*, a writable array to which the image is copied. If *'memmap'*, directly memory-map the image data in the file if possible; else create a memory-mapped array in a temporary file. If a *string* or *open file*, the file used to create a memory-mapped array. maxworkers: Maximum number of threads to concurrently decode data from multiple pages or compressed segments. If *None* or *0*, use up to :py:attr:`_TIFF.MAXWORKERS` threads. Reading data from file is limited to the main thread. Using multiple threads can significantly speed up this function if the bottleneck is decoding compressed data, for example, in case of large LZW compressed LSM files or JPEG compressed tiled slides. If the bottleneck is I/O or pure Python code, using multiple threads might be detrimental. buffersize: Approximate number of bytes to read from file in one pass. The default is :py:attr:`_TIFF.BUFFERSIZE`. Returns: Images from specified pages. See `TiffPage.asarray` for operations that are applied (or not) to the image data stored in the file. 
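For example, read the second pyramid level of the first series
(a sketch; the file name is hypothetical)::

    with TiffFile('pyramid.tif') as tif:
        image = tif.asarray(series=0, level=1, maxworkers=4)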
""" if not self.pages: return numpy.array([]) if key is None and series is None: series = 0 pages: Any # TiffPages | TiffPageSeries | list[TiffPage | TiffFrame] page0: TiffPage | TiffFrame | None if series is None: pages = self.pages else: if not isinstance(series, TiffPageSeries): series = self.series[series] if level is not None: series = series.levels[level] pages = series if key is None: pass elif series is None: pages = pages._getlist(key) elif isinstance(key, (int, numpy.integer)): pages = [pages[int(key)]] elif isinstance(key, slice): pages = pages[key] elif isinstance(key, Iterable) and not isinstance(key, str): pages = [pages[k] for k in key] else: raise TypeError( f'key must be an integer, slice, or sequence, not {type(key)}' ) if pages is None or len(pages) == 0: raise ValueError('no pages selected') if ( key is None and series is not None and series.dataoffset is not None ): typecode = self.byteorder + series.dtype.char if ( series.keyframe.is_memmappable and isinstance(out, str) and out == 'memmap' ): # direct mapping shape = series.get_shape(squeeze) result = self.filehandle.memmap_array( typecode, shape, series.dataoffset ) else: # read into output shape = series.get_shape(squeeze) if out is not None: out = create_output(out, shape, series.dtype) result = self.filehandle.read_array( typecode, series.size, series.dataoffset, out=out, ) elif len(pages) == 1: page0 = pages[0] if page0 is None: raise ValueError('page is None') result = page0.asarray( out=out, maxworkers=maxworkers, buffersize=buffersize ) else: result = stack_pages( pages, out=out, maxworkers=maxworkers, buffersize=buffersize ) assert result is not None if key is None: assert series is not None # TODO: ? shape = series.get_shape(squeeze) try: result.shape = shape except ValueError as exc: try: logger().warning( f'{self!r} failed to reshape ' f'{result.shape} to {shape}, raised {exc!r:.128}' ) # try series of expected shapes result.shape = (-1,) + shape except ValueError: # revert to generic shape result.shape = (-1,) + series.keyframe.shape elif len(pages) == 1: if squeeze is None: squeeze = True page0 = pages[0] if page0 is None: raise ValueError('page is None') result.shape = page0.shape if squeeze else page0.shaped else: if squeeze is None: squeeze = True try: page0 = next(p for p in pages if p is not None) except StopIteration as exc: raise ValueError('pages are all None') from exc assert page0 is not None result.shape = (-1,) + (page0.shape if squeeze else page0.shaped) return result def aszarr( self, key: int | None = None, *, series: int | TiffPageSeries | None = None, level: int | None = None, **kwargs: Any, ) -> ZarrTiffStore: """Return images from select pages as Zarr 2 store. By default, the images from the first series, including all levels, are wrapped as a Zarr 2 store. Parameters: key: Index of page in file (if `series` is None) or series to wrap as Zarr 2 store. By default, a series is wrapped. series: Index of series to wrap as Zarr 2 store. The default is 0 (if `key` is None). level: Index of pyramid level in series to wrap as Zarr 2 store. By default, all levels are included as a multi-scale group. **kwargs: Additional arguments passed to :py:meth:`TiffPage.aszarr` or :py:meth:`TiffPageSeries.aszarr`. 
""" if not self.pages: raise NotImplementedError('empty Zarr arrays not supported') if key is None and series is None: return self.series[0].aszarr(level=level, **kwargs) pages: Any if series is None: pages = self.pages else: if not isinstance(series, TiffPageSeries): series = self.series[series] if key is None: return series.aszarr(level=level, **kwargs) if level is not None: series = series.levels[level] pages = series if isinstance(key, (int, numpy.integer)): page: TiffPage | TiffFrame = pages[key] return page.aszarr(**kwargs) raise TypeError('key must be an integer index') @cached_property def series(self) -> list[TiffPageSeries]: """Series of pages with compatible shape and data type. Side effect: after accessing this property, `TiffFile.pages` might contain `TiffPage` and `TiffFrame` instead of only `TiffPage` instances. """ if not self.pages: return [] assert self.pages.keyframe is not None useframes = self.pages.useframes keyframe = self.pages.keyframe.index series: list[TiffPageSeries] | None = None for kind in ( 'shaped', 'lsm', 'mmstack', 'ome', 'imagej', 'ndtiff', 'fluoview', 'stk', 'sis', 'svs', 'scn', 'qpi', 'ndpi', 'bif', 'avs', 'philips', 'scanimage', # 'indica', # TODO: rewrite _series_indica() 'nih', 'mdgel', # adds second page to cache 'uniform', ): if getattr(self, 'is_' + kind, False): series = getattr(self, '_series_' + kind)() if not series: if kind == 'ome' and self.is_imagej: # try ImageJ series if OME series fails. # clear pages cache since _series_ome() might leave # some frames without keyframe self.pages._clear() continue if kind == 'mmstack': # try OME, ImageJ, uniform continue break if not series: series = self._series_generic() self.pages.useframes = useframes self.pages.set_keyframe(keyframe) # remove empty series, for example, in MD Gel files # series = [s for s in series if product(s.shape) > 0] assert series is not None for i, s in enumerate(series): s._index = i return series def _series_uniform(self) -> list[TiffPageSeries] | None: """Return all images in file as single series.""" self.pages.useframes = True self.pages.set_keyframe(0) page = self.pages.first validate = not (page.is_scanimage or page.is_nih) pages = self.pages._getlist(validate=validate) if len(pages) == 1: shape = page.shape axes = page.axes else: shape = (len(pages),) + page.shape axes = 'I' + page.axes dtype = page.dtype return [TiffPageSeries(pages, shape, dtype, axes, kind='uniform')] def _series_generic(self) -> list[TiffPageSeries] | None: """Return image series in file. A series is a sequence of TiffPages with the same hash. 
""" pages = self.pages pages._clear(False) pages.useframes = False if pages.cache: pages._load() series = [] keys = [] seriesdict: dict[int, list[TiffPage | TiffFrame]] = {} def addpage(page: TiffPage | TiffFrame, /) -> None: # add page to seriesdict if not page.shape: # or product(page.shape) == 0: return key = page.hash if key in seriesdict: for p in seriesdict[key]: if p.offset == page.offset: break # remove duplicate page else: seriesdict[key].append(page) else: keys.append(key) seriesdict[key] = [page] for page in pages: addpage(page) if page.subifds is not None: for i, offset in enumerate(page.subifds): if offset < 8: continue try: self._fh.seek(offset) subifd = TiffPage(self, (page.index, i)) except Exception as exc: logger().warning( f'{self!r} generic series raised {exc!r:.128}' ) else: addpage(subifd) for key in keys: pagelist = seriesdict[key] page = pagelist[0] shape = (len(pagelist),) + page.shape axes = 'I' + page.axes if 'S' not in axes: shape += (1,) axes += 'S' series.append( TiffPageSeries( pagelist, shape, page.dtype, axes, kind='generic' ) ) self.is_uniform = len(series) == 1 # replaces is_uniform method if not self.is_agilent: pyramidize_series(series) return series def _series_shaped(self) -> list[TiffPageSeries] | None: """Return image series in tifffile "shaped" formatted file.""" # TODO: all series need to have JSON metadata for this to succeed def append( series: list[TiffPageSeries], pages: list[TiffPage | TiffFrame | None], axes: str | None, shape: tuple[int, ...] | None, reshape: tuple[int, ...], name: str, truncated: bool | None, /, ) -> None: # append TiffPageSeries to series assert isinstance(pages[0], TiffPage) page = pages[0] if not check_shape(page.shape, reshape): logger().warning( f'{self!r} shaped series metadata does not match ' f'page shape {page.shape} != {tuple(reshape)}' ) failed = True else: failed = False if failed or axes is None or shape is None: shape = page.shape axes = page.axes if len(pages) > 1: shape = (len(pages),) + shape axes = 'Q' + axes if failed: reshape = shape size = product(shape) resize = product(reshape) if page.is_contiguous and resize > size and resize % size == 0: if truncated is None: truncated = True axes = 'Q' + axes shape = (resize // size,) + shape try: axes = reshape_axes(axes, shape, reshape) shape = reshape except ValueError as exc: logger().error( f'{self!r} shaped series failed to reshape, ' f'raised {exc!r:.128}' ) series.append( TiffPageSeries( pages, shape, page.dtype, axes, name=name, kind='shaped', truncated=bool(truncated), squeeze=False, ) ) def detect_series( pages: TiffPages | list[TiffPage | TiffFrame | None], series: list[TiffPageSeries], /, ) -> list[TiffPageSeries] | None: shape: tuple[int, ...] | None reshape: tuple[int, ...] 
page: TiffPage | TiffFrame | None keyframe: TiffPage subifds: list[TiffPage | TiffFrame | None] = [] subifd: TiffPage | TiffFrame keysubifd: TiffPage axes: str | None name: str lenpages = len(pages) index = 0 while True: if index >= lenpages: break if isinstance(pages, TiffPages): # new keyframe; start of new series pages.set_keyframe(index) keyframe = cast(TiffPage, pages.keyframe) else: # pages is list of SubIFDs keyframe = cast(TiffPage, pages[0]) if keyframe.shaped_description is None: logger().error( f'{self!r} ' 'invalid shaped series metadata or corrupted file' ) return None # read metadata axes = None shape = None metadata = shaped_description_metadata( keyframe.shaped_description ) name = metadata.get('name', '') reshape = metadata['shape'] truncated = None if keyframe.subifds is None else False truncated = metadata.get('truncated', truncated) if 'axes' in metadata: axes = cast(str, metadata['axes']) if len(axes) == len(reshape): shape = reshape else: axes = '' logger().error( f'{self!r} shaped series axes do not match shape' ) # skip pages if possible spages: list[TiffPage | TiffFrame | None] = [keyframe] size = product(reshape) if size > 0: npages, mod = divmod(size, product(keyframe.shape)) else: npages = 1 mod = 0 if mod: logger().error( f'{self!r} ' 'shaped series shape does not match page shape' ) return None if 1 < npages <= lenpages - index: assert keyframe._dtype is not None size *= keyframe._dtype.itemsize if truncated: npages = 1 else: page = pages[index + 1] if ( keyframe.is_final and page is not None and keyframe.offset + size < page.offset and keyframe.subifds is None ): truncated = False else: # must read all pages for series truncated = False for j in range(index + 1, index + npages): page = pages[j] assert page is not None page.keyframe = keyframe spages.append(page) append(series, spages, axes, shape, reshape, name, truncated) index += npages # create series from SubIFDs if keyframe.subifds: subifds_size = len(keyframe.subifds) for i, offset in enumerate(keyframe.subifds): if offset < 8: continue subifds = [] for j, page in enumerate(spages): # if page.subifds is not None: try: if ( page is None or page.subifds is None or len(page.subifds) < subifds_size ): raise ValueError( f'{page!r} contains invalid subifds' ) self._fh.seek(page.subifds[i]) if j == 0: subifd = TiffPage(self, (page.index, i)) keysubifd = subifd else: subifd = TiffFrame( self, (page.index, i), keyframe=keysubifd, ) except Exception as exc: logger().error( f'{self!r} shaped series ' f'raised {exc!r:.128}' ) return None subifds.append(subifd) if subifds: series_or_none = detect_series(subifds, series) if series_or_none is None: return None series = series_or_none return series self.pages.useframes = True series = detect_series(self.pages, []) if series is None: return None self.is_uniform = len(series) == 1 pyramidize_series(series, isreduced=True) return series def _series_imagej(self) -> list[TiffPageSeries] | None: """Return image series in ImageJ file.""" # ImageJ's dimension order is TZCYXS # TODO: fix loading of color, composite, or palette images meta = self.imagej_metadata if meta is None: return None pages = self.pages pages.useframes = True pages.set_keyframe(0) page = self.pages.first order = meta.get('order', 'czt').lower() frames = meta.get('frames', 1) slices = meta.get('slices', 1) channels = meta.get('channels', 1) images = meta.get('images', 1) # not reliable if images < 1 or frames < 1 or slices < 1 or channels < 1: logger().warning( f'{self!r} ImageJ series metadata invalid or 
corrupted file' ) return None if channels == 1: images = frames * slices elif page.shaped[0] > 1 and page.shaped[0] == channels: # Bio-Formats declares separate samples as channels images = frames * slices elif images == frames * slices and page.shaped[4] == channels: # RGB contig samples declared as channel channels = 1 else: images = frames * slices * channels if images == 1 and pages.is_multipage: images = len(pages) nbytes = images * page.nbytes # ImageJ virtual hyperstacks store all image metadata in the first # page and image data are stored contiguously before the second # page, if any if not page.is_final: isvirtual = False elif page.dataoffsets[0] + nbytes > self.filehandle.size: logger().error( f'{self!r} ImageJ series metadata invalid or corrupted file' ) return None elif images <= 1: isvirtual = True elif ( pages.is_multipage and page.dataoffsets[0] + nbytes > pages[1].offset ): # next page is not stored after data isvirtual = False else: isvirtual = True page_list: list[TiffPage | TiffFrame] if isvirtual: # no need to read other pages page_list = [page] else: page_list = pages[:] shape: tuple[int, ...] axes: str if order in {'czt', 'default'}: axes = 'TZC' shape = (frames, slices, channels) elif order == 'ctz': axes = 'ZTC' shape = (slices, frames, channels) elif order == 'zct': axes = 'TCZ' shape = (frames, channels, slices) elif order == 'ztc': axes = 'CTZ' shape = (channels, frames, slices) elif order == 'tcz': axes = 'ZCT' shape = (slices, channels, frames) elif order == 'tzc': axes = 'CZT' shape = (channels, slices, frames) else: axes = 'TZC' shape = (frames, slices, channels) logger().warning( f'{self!r} ImageJ series of unknown order {order!r}' ) remain = images // product(shape) if remain > 1: logger().debug( f'{self!r} ImageJ series contains unidentified dimension' ) shape = (remain,) + shape axes = 'I' + axes if page.shaped[0] > 1: # Bio-Formats declares separate samples as channels assert axes[-1] == 'C' shape = shape[:-1] + page.shape axes += page.axes[1:] else: shape += page.shape axes += page.axes if 'S' not in axes: shape += (1,) axes += 'S' # assert axes.endswith('TZCYXS'), axes truncated = ( isvirtual and not pages.is_multipage and page.nbytes != nbytes ) self.is_uniform = True return [ TiffPageSeries( page_list, shape, page.dtype, axes, kind='imagej', truncated=truncated, ) ] def _series_nih(self) -> list[TiffPageSeries] | None: """Return all images in NIH Image file as single series.""" series = self._series_uniform() if series is not None: for s in series: s.kind = 'nih' return series def _series_scanimage(self) -> list[TiffPageSeries] | None: """Return image series in ScanImage file.""" pages = self.pages._getlist(validate=False) page = self.pages.first dtype = page.dtype shape = None meta = self.scanimage_metadata if meta is None: framedata = {} else: framedata = meta.get('FrameData', {}) if 'SI.hChannels.channelSave' in framedata: try: channels = framedata['SI.hChannels.channelSave'] try: # channelSave is a list of channel IDs channels = len(channels) except TypeError: # channelSave is a single channel ID channels = 1 # slices = framedata.get( # 'SI.hStackManager.actualNumSlices', # framedata.get('SI.hStackManager.numSlices', None), # ) # if slices is None: # raise ValueError('unable to determine numSlices') slices = None try: frames = int(framedata['SI.hStackManager.framesPerSlice']) except Exception as exc: # framesPerSlice is inf slices = 1 if len(pages) % channels: raise ValueError( 'unable to determine framesPerSlice' ) from exc frames = len(pages) 
// channels if slices is None: slices = max(len(pages) // (frames * channels), 1) shape = (slices, frames, channels) + page.shape axes = 'ZTC' + page.axes except Exception as exc: logger().warning( f'{self!r} ScanImage series raised {exc!r:.128}' ) # TODO: older versions of ScanImage store non-varying frame data in # the ImageDescription tag. Candidates are scanimage.SI5.channelsSave, # scanimage.SI5.stackNumSlices, scanimage.SI5.acqNumFrames # scanimage.SI4., state.acq.numberOfFrames, state.acq.numberOfFrames... if shape is None: shape = (len(pages),) + page.shape axes = 'I' + page.axes return [TiffPageSeries(pages, shape, dtype, axes, kind='scanimage')] def _series_fluoview(self) -> list[TiffPageSeries] | None: """Return image series in FluoView file.""" meta = self.fluoview_metadata if meta is None: return None pages = self.pages._getlist(validate=False) mmhd = list(reversed(meta['Dimensions'])) axes = ''.join(TIFF.MM_DIMENSIONS.get(i[0].upper(), 'Q') for i in mmhd) shape = tuple(int(i[1]) for i in mmhd) self.is_uniform = True return [ TiffPageSeries( pages, shape, pages[0].dtype, axes, name=meta['ImageName'], kind='fluoview', ) ] def _series_mdgel(self) -> list[TiffPageSeries] | None: """Return image series in MD Gel file.""" # only a single page, scaled according to metadata in second page meta = self.mdgel_metadata if meta is None: return None transform: Callable[[NDArray[Any]], NDArray[Any]] | None self.pages.useframes = False self.pages.set_keyframe(0) if meta['FileTag'] in {2, 128}: dtype = numpy.dtype(numpy.float32) scale = meta['ScalePixel'] scale = scale[0] / scale[1] # rational if meta['FileTag'] == 2: # square root data format def transform(a: NDArray[Any], /) -> NDArray[Any]: return a.astype(numpy.float32) ** 2 * scale else: def transform(a: NDArray[Any], /) -> NDArray[Any]: return a.astype(numpy.float32) * scale else: transform = None page = self.pages.first self.is_uniform = False return [ TiffPageSeries( [page], page.shape, dtype, page.axes, transform=transform, kind='mdgel', ) ] def _series_ndpi(self) -> list[TiffPageSeries] | None: """Return pyramidal image series in NDPI file.""" series = self._series_generic() if series is None: return None for s in series: s.kind = 'ndpi' if s.axes[0] == 'I': s._set_dimensions(s.shape, 'Z' + s.axes[1:], None, True) if s.is_pyramidal: name = s.keyframe.tags.valueof(65427) s.name = 'Baseline' if name is None else name continue mag = s.keyframe.tags.valueof(65421) if mag is not None: if mag == -1.0: s.name = 'Macro' # s.kind += '_macro' elif mag == -2.0: s.name = 'Map' # s.kind += '_map' self.is_uniform = False return series def _series_avs(self) -> list[TiffPageSeries] | None: """Return pyramidal image series in AVS file.""" series = self._series_generic() if series is None: return None if len(series) != 3: logger().warning( f'{self!r} AVS series expected 3 series, got {len(series)}' ) s = series[0] s.kind = 'avs' if s.axes[0] == 'I': s._set_dimensions(s.shape, 'Z' + s.axes[1:], None, True) if s.is_pyramidal: s.name = 'Baseline' if len(series) == 3: series[1].name = 'Map' series[1].kind = 'avs' series[2].name = 'Macro' series[2].kind = 'avs' self.is_uniform = False return series def _series_philips(self) -> list[TiffPageSeries] | None: """Return pyramidal image series in Philips DP file.""" from xml.etree import ElementTree as etree series = [] pages = self.pages pages.cache = False pages.useframes = False pages.set_keyframe(0) pages._load() meta = self.philips_metadata assert meta is not None try: tree = etree.fromstring(meta) except
etree.ParseError as exc: logger().error(f'{self!r} Philips series raised {exc!r:.128}') return None pixel_spacing = [ tuple(float(v) for v in elem.text.replace('"', '').split()) for elem in tree.findall( './/*' '/DataObject[@ObjectType="PixelDataRepresentation"]' '/Attribute[@Name="DICOM_PIXEL_SPACING"]' ) if elem.text is not None ] if len(pixel_spacing) < 2: logger().error( f'{self!r} Philips series {len(pixel_spacing)=} < 2' ) return None series_dict: dict[str, list[TiffPage]] = {} series_dict['Level'] = [] series_dict['Other'] = [] for page in pages: assert isinstance(page, TiffPage) if page.description.startswith('Macro'): series_dict['Macro'] = [page] elif page.description.startswith('Label'): series_dict['Label'] = [page] elif not page.is_tiled: series_dict['Other'].append(page) else: series_dict['Level'].append(page) levels = series_dict.pop('Level') if len(levels) != len(pixel_spacing): logger().error( f'{self!r} Philips series ' f'{len(levels)=} != {len(pixel_spacing)=}' ) return None # fix padding of sublevels imagewidth0 = levels[0].imagewidth imagelength0 = levels[0].imagelength h0, w0 = pixel_spacing[0] for serie, (h, w) in zip(levels[1:], pixel_spacing[1:]): page = serie.keyframe # if page.dtype.itemsize == 1: # page.nodata = 255 imagewidth = imagewidth0 // int(round(w / w0)) imagelength = imagelength0 // int(round(h / h0)) if page.imagewidth - page.tilewidth >= imagewidth: logger().warning( f'{self!r} Philips series {page.index=} ' f'{page.imagewidth=}-{page.tilewidth=} >= {imagewidth=}' ) page.imagewidth -= page.tilewidth - 1 elif page.imagewidth < imagewidth: logger().warning( f'{self!r} Philips series {page.index=} ' f'{page.imagewidth=} < {imagewidth=}' ) else: page.imagewidth = imagewidth imagewidth = page.imagewidth if page.imagelength - page.tilelength >= imagelength: logger().warning( f'{self!r} Philips series {page.index=} ' f'{page.imagelength=}-{page.tilelength=} >= {imagelength=}' ) page.imagelength -= page.tilelength - 1 # elif page.imagelength < imagelength: # # in this case image is padded with zero else: page.imagelength = imagelength imagelength = page.imagelength if page.shaped[-1] > 1: page.shape = (imagelength, imagewidth, page.shape[-1]) elif page.shaped[0] > 1: page.shape = (page.shape[0], imagelength, imagewidth) else: page.shape = (imagelength, imagewidth) page.shaped = ( page.shaped[:2] + (imagelength, imagewidth) + page.shaped[-1:] ) series = [TiffPageSeries([levels[0]], name='Baseline', kind='philips')] for i, page in enumerate(levels[1:]): series[0].levels.append( TiffPageSeries([page], name=f'Level{i + 1}', kind='philips') ) for key, value in series_dict.items(): for page in value: series.append(TiffPageSeries([page], name=key, kind='philips')) self.is_uniform = False return series def _series_indica(self) -> list[TiffPageSeries] | None: """Return pyramidal image series in IndicaLabs file.""" # TODO: need more IndicaLabs sample files # TODO: parse indica series from XML # TODO: alpha channels in SubIFDs or main IFDs from xml.etree import ElementTree as etree series = self._series_generic() if series is None or len(series) != 1: return series try: tree = etree.fromstring(self.pages.first.description) except etree.ParseError as exc: logger().error(f'{self!r} Indica series raised {exc!r:.128}') return series channel_names = [ channel.attrib['name'] for channel in tree.iter('channel') ] for s in series: s.kind = 'indica' # TODO: identify other dimensions if s.axes[0] == 'I' and s.shape[0] == len(channel_names): s._set_dimensions(s.shape, 'C' + 
s.axes[1:], None, True) if s.is_pyramidal: s.name = 'Baseline' self.is_uniform = False return series def _series_sis(self) -> list[TiffPageSeries] | None: """Return image series in Olympus SIS file.""" meta = self.sis_metadata if meta is None: return None pages = self.pages._getlist(validate=False) # TODO: this fails for VSI page = pages[0] lenpages = len(pages) if 'shape' in meta and 'axes' in meta: shape = meta['shape'] + page.shape axes = meta['axes'] + page.axes else: shape = (lenpages,) + page.shape axes = 'I' + page.axes self.is_uniform = True return [TiffPageSeries(pages, shape, page.dtype, axes, kind='sis')] def _series_qpi(self) -> list[TiffPageSeries] | None: """Return image series in PerkinElmer QPI file.""" series = [] pages = self.pages pages.cache = True pages.useframes = False pages.set_keyframe(0) pages._load() page0 = self.pages.first # Baseline # TODO: get name from ImageDescription XML ifds = [] index = 0 axes = 'C' + page0.axes dtype = page0.dtype pshape = page0.shape while index < len(pages): page = pages[index] if page.shape != pshape: break ifds.append(page) index += 1 shape = (len(ifds),) + pshape series.append( TiffPageSeries( ifds, shape, dtype, axes, name='Baseline', kind='qpi' ) ) if index < len(pages): # Thumbnail page = pages[index] series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Thumbnail', kind='qpi', ) ) index += 1 if page0.is_tiled: # Resolutions while index < len(pages): pshape = (pshape[0] // 2, pshape[1] // 2) + pshape[2:] ifds = [] while index < len(pages): page = pages[index] if page.shape != pshape: break ifds.append(page) index += 1 if len(ifds) != len(series[0].pages): break shape = (len(ifds),) + pshape series[0].levels.append( TiffPageSeries( ifds, shape, dtype, axes, name='Resolution', kind='qpi' ) ) if series[0].is_pyramidal and index < len(pages): # Macro page = pages[index] series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Macro', kind='qpi', ) ) index += 1 # Label if index < len(pages): page = pages[index] series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Label', kind='qpi', ) ) self.is_uniform = False return series def _series_svs(self) -> list[TiffPageSeries] | None: """Return image series in Aperio SVS file.""" if not self.pages.first.is_tiled: return None series = [] self.pages.cache = True self.pages.useframes = False self.pages.set_keyframe(0) self.pages._load() # baseline firstpage = self.pages.first if len(self.pages) == 1: self.is_uniform = False return [ TiffPageSeries( [firstpage], firstpage.shape, firstpage.dtype, firstpage.axes, name='Baseline', kind='svs', ) ] # thumbnail page = self.pages[1] thumnail = TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Thumbnail', kind='svs', ) # resolutions and focal planes levels = {firstpage.shape: [firstpage]} index = 2 while index < len(self.pages): page = cast(TiffPage, self.pages[index]) if not page.is_tiled or page.is_reduced: break if page.shape in levels: levels[page.shape].append(page) else: levels[page.shape] = [page] index += 1 zsize = len(levels[firstpage.shape]) if not all(len(level) == zsize for level in levels.values()): logger().warning(f'{self!r} SVS series focal planes do not match') zsize = 1 baseline = TiffPageSeries( levels[firstpage.shape], (zsize,) + firstpage.shape, firstpage.dtype, 'Z' + firstpage.axes, name='Baseline', kind='svs', ) for shape, level in levels.items(): if shape == firstpage.shape: continue page = level[0] baseline.levels.append( TiffPageSeries( 
level, (zsize,) + page.shape, page.dtype, 'Z' + page.axes, name='Resolution', kind='svs', ) ) series.append(baseline) series.append(thumnail) # Label, Macro; subfiletype 1, 9 for _ in range(2): if index == len(self.pages): break page = self.pages[index] assert isinstance(page, TiffPage) if page.subfiletype == 9: name = 'Macro' else: name = 'Label' series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name=name, kind='svs', ) ) index += 1 self.is_uniform = False return series def _series_scn(self) -> list[TiffPageSeries] | None: """Return pyramidal image series in Leica SCN file.""" # TODO: support collections from xml.etree import ElementTree as etree scnxml = self.pages.first.description root = etree.fromstring(scnxml) series = [] self.pages.cache = True self.pages.useframes = False self.pages.set_keyframe(0) self.pages._load() for collection in root: if not collection.tag.endswith('collection'): continue for image in collection: if not image.tag.endswith('image'): continue name = image.attrib.get('name', 'Unknown') for pixels in image: if not pixels.tag.endswith('pixels'): continue resolutions: dict[int, dict[str, Any]] = {} for dimension in pixels: if not dimension.tag.endswith('dimension'): continue if int(image.attrib.get('sizeZ', 1)) > 1: raise NotImplementedError( 'SCN series: Z-Stacks not supported. ' 'Please submit a sample file.' ) sizex = int(dimension.attrib['sizeX']) sizey = int(dimension.attrib['sizeY']) c = int(dimension.attrib.get('c', 0)) z = int(dimension.attrib.get('z', 0)) r = int(dimension.attrib.get('r', 0)) ifd = int(dimension.attrib['ifd']) if r in resolutions: level = resolutions[r] if c > level['channels']: level['channels'] = c if z > level['sizez']: level['sizez'] = z level['ifds'][(c, z)] = ifd else: resolutions[r] = { 'size': [sizey, sizex], 'channels': c, 'sizez': z, 'ifds': {(c, z): ifd}, } if not resolutions: continue levels = [] for r, level in sorted(resolutions.items()): shape: tuple[int, ...] = ( level['channels'] + 1, level['sizez'] + 1, ) axes = 'CZ' ifds: list[TiffPage | TiffFrame | None] = [ None ] * product(shape) for (c, z), ifd in sorted(level['ifds'].items()): ifds[c * shape[1] + z] = self.pages[ifd] assert ifds[0] is not None axes += ifds[0].axes shape += ifds[0].shape dtype = ifds[0].dtype levels.append( TiffPageSeries( ifds, shape, dtype, axes, parent=self, name=name, kind='scn', ) ) levels[0].levels.extend(levels[1:]) series.append(levels[0]) self.is_uniform = False return series def _series_bif(self) -> list[TiffPageSeries] | None: """Return image series in Ventana/Roche BIF file.""" series = [] baseline: TiffPageSeries | None = None self.pages.cache = True self.pages.useframes = False self.pages.set_keyframe(0) self.pages._load() for page in self.pages: page = cast(TiffPage, page) if page.description[:5] == 'Label': series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Label', kind='bif', ) ) elif ( page.description == 'Thumbnail' or page.description[:11] == 'Probability' ): series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Thumbnail', kind='bif', ) ) elif 'level' not in page.description: # TODO: is this necessary? 
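                # page is neither a label/thumbnail nor a pyramid level;
                # keep it as a separate series of unknown purpose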
series.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Unknown', kind='bif', ) ) elif baseline is None: baseline = TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Baseline', kind='bif', ) series.insert(0, baseline) else: baseline.levels.append( TiffPageSeries( [page], page.shape, page.dtype, page.axes, name='Resolution', kind='bif', ) ) logger().warning(f'{self!r} BIF series tiles are not stitched') self.is_uniform = False return series def _series_ome(self) -> list[TiffPageSeries] | None: """Return image series in OME-TIFF file(s).""" # xml.etree found to be faster than lxml from xml.etree import ElementTree as etree omexml = self.ome_metadata if omexml is None: return None try: root = etree.fromstring(omexml) except etree.ParseError as exc: # TODO: test badly encoded OME-XML logger().error(f'{self!r} OME series raised {exc!r:.128}') return None keyframe: TiffPage ifds: list[TiffPage | TiffFrame | None] size: int = -1 def load_pages(tif: TiffFile, /) -> None: tif.pages.cache = True tif.pages.useframes = True tif.pages.set_keyframe(0) tif.pages._load(None) load_pages(self) root_uuid = root.attrib.get('UUID', None) self._files = {root_uuid: self} dirname = self._fh.dirname files_missing = 0 moduloref = [] modulo: dict[str, dict[str, tuple[str, int]]] = {} series: list[TiffPageSeries] = [] for element in root: if element.tag.endswith('BinaryOnly'): # TODO: load OME-XML from master or companion file logger().debug( f'{self!r} OME series is BinaryOnly, ' 'not an OME-TIFF master file' ) break if element.tag.endswith('StructuredAnnotations'): for annot in element: if not annot.attrib.get('Namespace', '').endswith( 'modulo' ): continue modulo[annot.attrib['ID']] = mod = {} for value in annot: for modulo_ns in value: for along in modulo_ns: if not along.tag[:-1].endswith('Along'): continue axis = along.tag[-1] newaxis = along.attrib.get('Type', 'other') newaxis = TIFF.AXES_CODES[newaxis] if 'Start' in along.attrib: step = float(along.attrib.get('Step', 1)) start = float(along.attrib['Start']) stop = float(along.attrib['End']) + step labels = len( numpy.arange(start, stop, step) ) else: labels = len( [ label for label in along if label.tag.endswith('Label') ] ) mod[axis] = (newaxis, labels) if not element.tag.endswith('Image'): continue for annot in element: if annot.tag.endswith('AnnotationRef'): annotationref = annot.attrib['ID'] break else: annotationref = None attr = element.attrib name = attr.get('Name', None) for pixels in element: if not pixels.tag.endswith('Pixels'): continue attr = pixels.attrib # dtype = attr.get('PixelType', None) axes = ''.join(reversed(attr['DimensionOrder'])) shape = [int(attr['Size' + ax]) for ax in axes] ifds = [] spp = 1 # samples per pixel first = True for data in pixels: if data.tag.endswith('Channel'): attr = data.attrib if first: first = False spp = int(attr.get('SamplesPerPixel', spp)) if spp > 1: # correct channel dimension for spp shape = [ shape[i] // spp if ax == 'C' else shape[i] for i, ax in enumerate(axes) ] elif int(attr.get('SamplesPerPixel', 1)) != spp: raise ValueError( 'OME series cannot handle differing ' 'SamplesPerPixel' ) continue if not data.tag.endswith('TiffData'): continue attr = data.attrib ifd_index = int(attr.get('IFD', 0)) num = int(attr.get('NumPlanes', 1 if 'IFD' in attr else 0)) num = int(attr.get('PlaneCount', num)) idxs = [int(attr.get('First' + ax, 0)) for ax in axes[:-2]] try: idx = int(numpy.ravel_multi_index(idxs, shape[:-2])) except ValueError as exc: # ImageJ produces invalid ome-xml when cropping logger().warning( f'{self!r} ' 'OME series contains invalid TiffData index, ' f'raised {exc!r:.128}', ) continue for uuid in data: if not uuid.tag.endswith('UUID'): continue if ( root_uuid is None and uuid.text is not None and ( uuid.attrib.get('FileName', '').lower() == self.filename.lower() ) ): # no global UUID, use this file root_uuid = uuid.text self._files[root_uuid] = self._files[None] del self._files[None] elif uuid.text not in self._files: if not self._multifile: # abort reading multifile OME series # and fall back to generic series return [] fname = uuid.attrib['FileName'] try: if not self.filehandle.is_file: raise ValueError tif = TiffFile( os.path.join(dirname, fname), _parent=self ) load_pages(tif) except ( OSError, FileNotFoundError, ValueError, ) as exc: if files_missing == 0: logger().warning( f'{self!r} OME series failed to read ' f'{fname!r}, raised {exc!r:.128}. ' 'Missing data are zeroed' ) files_missing += 1 # assume that size is same as in previous file # if no NumPlanes or PlaneCount are given if num: size = num elif size == -1: raise ValueError( 'OME series missing ' 'NumPlanes or PlaneCount' ) from exc ifds.extend([None] * (size + idx - len(ifds))) break self._files[uuid.text] = tif tif.close() pages = self._files[uuid.text].pages try: size = num if num else len(pages) ifds.extend([None] * (size + idx - len(ifds))) for i in range(size): ifds[idx + i] = pages[ifd_index + i] except IndexError as exc: logger().warning( f'{self!r} ' 'OME series contains index out of range, ' f'raised {exc!r:.128}' ) # only process first UUID break else: # no uuid found pages = self.pages try: size = num if num else len(pages) ifds.extend([None] * (size + idx - len(ifds))) for i in range(size): ifds[idx + i] = pages[ifd_index + i] except IndexError as exc: logger().warning( f'{self!r} ' 'OME series contains index out of range, ' f'raised {exc!r:.128}' ) if not ifds or all(i is None for i in ifds): # skip images without data continue # find a keyframe for ifd in ifds: # try to find a TiffPage if ifd is not None and ifd == ifd.keyframe: keyframe = cast(TiffPage, ifd) break else: # reload a TiffPage from file for i, ifd in enumerate(ifds): if ifd is not None: isclosed = ifd.parent.filehandle.closed if isclosed: ifd.parent.filehandle.open() ifd.parent.pages.set_keyframe(ifd.index) keyframe = cast( TiffPage, ifd.parent.pages[ifd.index] ) ifds[i] = keyframe if isclosed: keyframe.parent.filehandle.close() break # does the series span multiple files multifile = False for ifd in ifds: if ifd and ifd.parent != keyframe.parent: multifile = True break if spp > 1: if keyframe.planarconfig == 1: shape += [spp] axes += 'S' else: shape = shape[:-2] + [spp] + shape[-2:] axes = axes[:-2] + 'S' + axes[-2:] if 'S' not in axes: shape += [1] axes += 'S' # number of pages in the file might mismatch XML metadata, for # example Nikon-cell011.ome.tif or stack_t24_y2048_x2448.tiff size = max(product(shape) // keyframe.size, 1) if size < len(ifds): logger().warning( f'{self!r} ' f'OME series expected {size} frames, got {len(ifds)}' ) ifds = ifds[:size] elif size > len(ifds): logger().warning( f'{self!r} ' f'OME series is missing {size - len(ifds)} frames.' ' Missing data are zeroed' ) ifds.extend([None] * (size - len(ifds))) # FIXME: this implementation assumes the last dimensions are # stored in TIFF pages. Apparently that is not always the case. # For example, TCX (20000, 2, 500) is stored in 2 pages of # (20000, 500) in 'Image 7.ome_h00.tiff'.
# For now, verify that shapes of keyframe and series match. # If not, skip series. squeezed = squeeze_axes(shape, axes)[0] if keyframe.shape != tuple(squeezed[-len(keyframe.shape) :]): logger().warning( f'{self!r} OME series cannot handle discontiguous ' f'storage ({keyframe.shape} != ' f'{tuple(squeezed[-len(keyframe.shape) :])})', ) del ifds continue # set keyframe on all IFDs # each series must contain a TiffPage used as keyframe keyframes: dict[str, TiffPage] = { keyframe.parent.filehandle.name: keyframe } for i, page in enumerate(ifds): if page is None: continue fh = page.parent.filehandle if fh.name not in keyframes: if page.keyframe != page: # reload TiffPage from file isclosed = fh.closed if isclosed: fh.open() page.parent.pages.set_keyframe(page.index) page = page.parent.pages[page.index] ifds[i] = page if isclosed: fh.close() keyframes[fh.name] = cast(TiffPage, page) if page.keyframe != page: page.keyframe = keyframes[fh.name] moduloref.append(annotationref) series.append( TiffPageSeries( ifds, shape, keyframe.dtype, axes, parent=self, name=name, multifile=multifile, kind='ome', ) ) del ifds if files_missing > 1: logger().warning( f'{self!r} OME series failed to read {files_missing} files' ) # apply modulo according to AnnotationRef for aseries, annotationref in zip(series, moduloref): if annotationref not in modulo: continue shape = list(aseries.get_shape(False)) axes = aseries.get_axes(False) for axis, (newaxis, size) in modulo[annotationref].items(): i = axes.index(axis) if shape[i] == size: axes = axes.replace(axis, newaxis, 1) else: shape[i] //= size shape.insert(i + 1, size) axes = axes.replace(axis, axis + newaxis, 1) aseries._set_dimensions(shape, axes, None) # pyramids for aseries in series: keyframe = aseries.keyframe if keyframe.subifds is None: continue if len(self._files) > 1: # TODO: support multi-file pyramids; must re-open/close logger().warning( f'{self!r} OME series cannot read multi-file pyramids' ) break for level in range(len(keyframe.subifds)): found_keyframe = False ifds = [] for page in aseries.pages: if ( page is None or page.subifds is None or page.subifds[level] < 8 ): ifds.append(None) continue page.parent.filehandle.seek(page.subifds[level]) if page.keyframe == page: ifd = keyframe = TiffPage( self, (page.index, level + 1) ) found_keyframe = True elif not found_keyframe: raise RuntimeError('no keyframe found') else: ifd = TiffFrame( self, (page.index, level + 1), keyframe=keyframe ) ifds.append(ifd) if all(ifd_or_none is None for ifd_or_none in ifds): logger().warning( f'{self!r} OME series level {level + 1} is empty' ) break # fix shape shape = list(aseries.get_shape(False)) axes = aseries.get_axes(False) for i, ax in enumerate(axes): if ax == 'X': shape[i] = keyframe.imagewidth elif ax == 'Y': shape[i] = keyframe.imagelength # add series aseries.levels.append( TiffPageSeries( ifds, tuple(shape), keyframe.dtype, axes, parent=self, name=f'level {level + 1}', kind='ome', ) ) self.is_uniform = len(series) == 1 and len(series[0].levels) == 1 return series def _series_mmstack(self) -> list[TiffPageSeries] | None: """Return series in Micro-Manager stack file(s).""" settings = self.micromanager_metadata if ( settings is None or 'Summary' not in settings or 'IndexMap' not in settings ): return None pages: list[TiffPage | TiffFrame | None] page_count: int summary = settings['Summary'] indexmap = settings['IndexMap'] indexmap = indexmap[indexmap[:, 4].argsort()] if 'MicroManagerVersion' not in summary or 'Frames' not in summary: # TODO: handle MagellanStack? 
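            # Descriptive note: a regular MMStack acquisition records
            # MicroManagerVersion and Frames in the Summary metadata and an
            # IndexMap whose rows hold (channel, slice, frame, position,
            # offset) entries; without them, bail out so the OME, ImageJ,
            # or uniform parsers can handle the file instead.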
return None # determine CZTR shape from indexmap; TODO: is this necessary? indexmap_shape = (numpy.max(indexmap[:, :4], axis=0) + 1).tolist() indexmap_index = {'C': 0, 'Z': 1, 'T': 2, 'R': 3} # TODO: activate this? # if 'AxisOrder' in summary: # axesorder = summary['AxisOrder'] # keys = { # 'channel': 'C', # 'z': 'Z', # 'slice': 'Z', # 'position': 'R', # 'time': 'T', # } # axes = ''.join(keys[ax] for ax in reversed(axesorder)) axes = 'TR' if summary.get('TimeFirst', True) else 'RT' axes += 'ZC' if summary.get('SlicesFirst', True) else 'CZ' keys = { 'C': 'Channels', 'Z': 'Slices', 'R': 'Positions', 'T': 'Frames', } shape = tuple( max( indexmap_shape[indexmap_index[ax]], int(summary.get(keys[ax], 1)), ) for ax in axes ) size = product(shape) indexmap_order = tuple(indexmap_index[ax] for ax in axes) def add_file(tif: TiffFile, indexmap: NDArray[Any]) -> int: # add virtual TiffFrames to pages list page_count = 0 offsets: list[int] offsets = indexmap[:, 4].tolist() # type: ignore[assignment] indices = numpy.ravel_multi_index( # type: ignore[call-overload] indexmap[:, indexmap_order].T, shape, ).tolist() keyframe = tif.pages.first filesize = tif.filehandle.size - keyframe.databytecounts[0] - 162 index: int offset: int for index, offset in zip(indices, offsets): if offset == keyframe.offset: pages[index] = keyframe page_count += 1 continue if 0 < offset <= filesize: dataoffsets = (offset + 162,) databytecounts = keyframe.databytecounts page_count += 1 else: # assume file is truncated dataoffsets = databytecounts = (0,) offset = 0 pages[index] = TiffFrame( tif, index=index, offset=offset, dataoffsets=dataoffsets, databytecounts=databytecounts, keyframe=keyframe, ) return page_count multifile = size > indexmap.shape[0] if multifile: # get multifile prefix if not self.filehandle.is_file: logger().warning( f'{self!r} MMStack multi-file series cannot be read from ' f'{self.filehandle._fh!r}' ) multifile = False elif '_MMStack' not in self.filename: logger().warning(f'{self!r} MMStack file name is invalid') multifile = False elif 'Prefix' in summary: prefix = summary['Prefix'] if not self.filename.startswith(prefix): logger().warning(f'{self!r} MMStack file name is invalid') multifile = False else: prefix = self.filename.split('_MMStack')[0] if multifile: # read other files pattern = os.path.join( self.filehandle.dirname, prefix + '_MMStack*.tif' ) filenames = glob.glob(pattern) if len(filenames) == 1: multifile = False else: pages = [None] * size page_count = add_file(self, indexmap) for fname in filenames: if self.filename == os.path.split(fname)[-1]: continue with TiffFile(fname) as tif: indexmap = read_micromanager_metadata( tif.filehandle, {'IndexMap'} )['IndexMap'] indexmap = indexmap[indexmap[:, 4].argsort()] page_count += add_file(tif, indexmap) if multifile: pass elif size > indexmap.shape[0]: # other files missing: squeeze shape old_shape = shape min_index = numpy.min(indexmap[:, :4], axis=0) max_index = numpy.max(indexmap[:, :4], axis=0) indexmap = indexmap.copy() indexmap[:, :4] -= min_index shape = tuple( j - i + 1 for i, j in zip(min_index.tolist(), max_index.tolist()) ) shape = tuple(shape[i] for i in indexmap_order) size = product(shape) pages = [None] * size page_count = add_file(self, indexmap) logger().warning( f'{self!r} MMStack series is missing files. ' f'Returning subset {shape!r} of {old_shape!r}' ) else: # single file pages = [None] * size page_count = add_file(self, indexmap) if page_count != size: logger().warning( f'{self!r} MMStack is missing {size - page_count} pages.' 
' Missing data are zeroed' ) keyframe = self.pages.first return [ TiffPageSeries( pages, shape=shape + keyframe.shape, dtype=keyframe.dtype, axes=axes + keyframe.axes, # axestiled=axestiled, # axesoverlap=axesoverlap, # coords=coords, parent=self, kind='mmstack', multifile=multifile, squeeze=True, ) ] def _series_ndtiff(self) -> list[TiffPageSeries] | None: """Return series in NDTiff v2 and v3 files.""" # TODO: implement fallback for missing index file, versions 0 and 1 if not self.filehandle.is_file: logger().warning( f'{self!r} NDTiff.index not found for {self.filehandle._fh!r}' ) return None indexfile = os.path.join(self.filehandle.dirname, 'NDTiff.index') if not os.path.exists(indexfile): logger().warning(f'{self!r} NDTiff.index not found') return None keyframes: dict[str, TiffPage] = {} shape: tuple[int, ...] dims: tuple[str, ...] page: TiffPage | TiffFrame pageindex = 0 pixel_types = { 0: ('uint8', 8), # 8bit monochrome 1: ('uint16', 16), # 16bit monochrome 2: ('uint8', 8), # 8bit RGB 3: ('uint16', 10), # 10bit monochrome 4: ('uint16', 12), # 12bit monochrome 5: ('uint16', 14), # 14bit monochrome 6: ('uint16', 11), # 11bit monochrome } indices: dict[tuple[int, ...], TiffPage | TiffFrame] = {} categories: dict[str, dict[str, int]] = {} first = True for ( axes_dict, filename, dataoffset, width, height, pixeltype, compression, metaoffset, metabytecount, metacompression, ) in read_ndtiff_index(indexfile): if filename in keyframes: # create virtual frame from index pageindex += 1 # TODO keyframe = keyframes[filename] page = TiffFrame( keyframe.parent, pageindex, offset=None, # virtual frame keyframe=keyframe, dataoffsets=(dataoffset,), databytecounts=keyframe.databytecounts, ) if page.shape[:2] != (height, width): raise ValueError( 'NDTiff.index does not match TIFF shape ' f'{page.shape[:2]} != {(height, width)}' ) if compression != 0: raise ValueError( f'NDTiff.index compression {compression} not supported' ) if page.compression != 1: raise ValueError( 'NDTiff.index does not match TIFF compression ' f'{page.compression!r}' ) if pixeltype not in pixel_types: raise ValueError( f'NDTiff.index unknown pixel type {pixeltype}' ) dtype, _ = pixel_types[pixeltype] if page.dtype != dtype: raise ValueError( 'NDTiff.index pixeltype does not match TIFF dtype ' f'{page.dtype} != {dtype}' ) elif filename == self.filename: # use first page as keyframe pageindex = 0 page = self.pages.first keyframes[filename] = page else: # read keyframe from file pageindex = 0 with TiffFile( os.path.join(self.filehandle.dirname, filename) ) as tif: page = tif.pages.first keyframes[filename] = page # replace string with integer indices index: int | str if first: for axis, index in axes_dict.items(): if isinstance(index, str): categories[axis] = {index: 0} axes_dict[axis] = 0 first = False elif categories: for axis, values in categories.items(): index = axes_dict[axis] assert isinstance(index, str) if index not in values: values[index] = max(values.values()) + 1 axes_dict[axis] = values[index] indices[tuple(axes_dict.values())] = page # type: ignore[arg-type] dims = tuple(axes_dict.keys()) # indices may be negative or missing indices_array = numpy.array(list(indices.keys()), dtype=numpy.int32) min_index = numpy.min(indices_array, axis=0).tolist() max_index = numpy.max(indices_array, axis=0).tolist() shape = tuple(j - i + 1 for i, j in zip(min_index, max_index)) # change axes to match storage order order = order_axes(indices_array, squeeze=False) shape = tuple(shape[i] for i in order) dims = tuple(dims[i] for i in order)
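        # Descriptive note: shift all axis indices to be zero-based; for
        # example, index values (-1, 0, 1) along an axis map to (0, 1, 2).
        # Positions absent from NDTiff.index stay None and read as zeros.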
indices = { tuple(index[i] - min_index[i] for i in order): value for index, value in indices.items() } pages: list[TiffPage | TiffFrame | None] = [] for idx in numpy.ndindex(shape): pages.append(indices.get(idx, None)) keyframe = next(i for i in keyframes.values()) shape += keyframe.shape dims += keyframe.dims axes = ''.join(TIFF.AXES_CODES.get(i.lower(), 'Q') for i in dims) # TODO: support tiled axes and overlap # meta: Any = self.micromanager_metadata # if meta is None: # meta = {} # elif 'Summary' in meta: # meta = meta['Summary'] # # map axes column->x, row->y # axestiled: dict[int, int] = {} # axesoverlap: dict[int, int] = {} # if 'column' in dims: # key = dims.index('column') # axestiled[key] = keyframe.axes.index('X') # axesoverlap[key] = meta.get('GridPixelOverlapX', 0) # if 'row' in dims: # key = dims.index('row') # axestiled[key] = keyframe.axes.index('Y') # axesoverlap[key] = meta.get('GridPixelOverlapY', 0) # if all(i == 0 for i in axesoverlap.values()): # axesoverlap = {} self.is_uniform = True return [ TiffPageSeries( pages, shape=shape, dtype=keyframe.dtype, axes=axes, # axestiled=axestiled, # axesoverlap=axesoverlap, # coords=coords, parent=self, kind='ndtiff', multifile=len(keyframes) > 1, squeeze=True, ) ] def _series_stk(self) -> list[TiffPageSeries] | None: """Return series in STK file.""" meta = self.stk_metadata if meta is None: return None page = self.pages.first planes = meta['NumberPlanes'] name = meta.get('Name', '') if planes == 1: shape = (1,) + page.shape axes = 'I' + page.axes elif numpy.all(meta['ZDistance'] != 0): shape = (planes,) + page.shape axes = 'Z' + page.axes elif numpy.all(numpy.diff(meta['TimeCreated']) != 0): shape = (planes,) + page.shape axes = 'T' + page.axes else: # TODO: determine other/combinations of dimensions shape = (planes,) + page.shape axes = 'I' + page.axes self.is_uniform = True series = TiffPageSeries( [page], shape, page.dtype, axes, name=name, truncated=planes > 1, kind='stk', ) return [series] def _series_lsm(self) -> list[TiffPageSeries] | None: """Return main and thumbnail series in LSM file.""" lsmi = self.lsm_metadata if lsmi is None: return None axes = TIFF.CZ_LSMINFO_SCANTYPE[lsmi['ScanType']] if self.pages.first.planarconfig == 1: axes = axes.replace('C', '').replace('X', 'XC') elif self.pages.first.planarconfig == 2: # keep axis for `get_shape(False)` pass elif self.pages.first.samplesperpixel == 1: axes = axes.replace('C', '') if lsmi.get('DimensionP', 0) > 0: axes = 'P' + axes if lsmi.get('DimensionM', 0) > 0: axes = 'M' + axes shape = tuple(int(lsmi[TIFF.CZ_LSMINFO_DIMENSIONS[i]]) for i in axes) name = lsmi.get('Name', '') pages = self.pages._getlist(slice(0, None, 2), validate=False) dtype = pages[0].dtype series = [ TiffPageSeries(pages, shape, dtype, axes, name=name, kind='lsm') ] page = cast(TiffPage, self.pages[1]) if page.is_reduced: pages = self.pages._getlist(slice(1, None, 2), validate=False) dtype = page.dtype cp = 1 i = 0 while cp < len(pages) and i < len(shape) - 2: cp *= shape[i] i += 1 shape = shape[:i] + page.shape axes = axes[:i] + page.axes series.append( TiffPageSeries( pages, shape, dtype, axes, name=name, kind='lsm' ) ) self.is_uniform = False return series def _lsm_load_pages(self) -> None: """Read and fix all pages from LSM file.""" # cache all pages to preserve corrected values pages = self.pages pages.cache = True pages.useframes = True # use first and second page as keyframes pages.set_keyframe(1) pages.set_keyframe(0) # load remaining pages as frames pages._load(None) # fix offsets and 
bytecounts first # TODO: fix multiple conversions between lists and tuples self._lsm_fix_strip_offsets() self._lsm_fix_strip_bytecounts() # assign keyframes for data and thumbnail series keyframe = self.pages.first for page in pages._pages[::2]: page.keyframe = keyframe # type: ignore[union-attr] keyframe = cast(TiffPage, pages[1]) for page in pages._pages[1::2]: page.keyframe = keyframe # type: ignore[union-attr] def _lsm_fix_strip_offsets(self) -> None: """Unwrap strip offsets for LSM files greater than 4 GB. Each series and position require separate unwrapping (undocumented). """ if self.filehandle.size < 2**32: return indices: NDArray[Any] pages = self.pages npages = len(pages) series = self.series[0] axes = series.axes # find positions positions = 1 for i in 0, 1: if series.axes[i] in 'PM': positions *= series.shape[i] # make time axis first if positions > 1: ntimes = 0 for i in 1, 2: if axes[i] == 'T': ntimes = series.shape[i] break if ntimes: div, mod = divmod(npages, 2 * positions * ntimes) if mod != 0: raise RuntimeError('mod != 0') shape = (positions, ntimes, div, 2) indices = numpy.arange(product(shape)).reshape(shape) indices = numpy.moveaxis(indices, 1, 0) else: indices = numpy.arange(npages).reshape(-1, 2) else: indices = numpy.arange(npages).reshape(-1, 2) # images of reduced page might be stored first if pages[0].dataoffsets[0] > pages[1].dataoffsets[0]: indices = indices[..., ::-1] # unwrap offsets wrap = 0 previousoffset = 0 for npi in indices.flat: page = pages[int(npi)] dataoffsets = [] if all(i <= 0 for i in page.dataoffsets): logger().warning( f'{self!r} LSM file incompletely written at {page}' ) break for currentoffset in page.dataoffsets: if currentoffset < previousoffset: wrap += 2**32 dataoffsets.append(currentoffset + wrap) previousoffset = currentoffset page.dataoffsets = tuple(dataoffsets) def _lsm_fix_strip_bytecounts(self) -> None: """Set databytecounts to size of compressed data. The StripByteCounts tag in LSM files contains the number of bytes for the uncompressed data. """ if self.pages.first.compression == 1: return # sort pages by first strip offset pages = sorted(self.pages, key=lambda p: p.dataoffsets[0]) npages = len(pages) - 1 for i, page in enumerate(pages): if page.index % 2: continue offsets = page.dataoffsets bytecounts = page.databytecounts if i < npages: lastoffset = pages[i + 1].dataoffsets[0] else: # LZW compressed strips might be longer than uncompressed lastoffset = min( offsets[-1] + 2 * bytecounts[-1], self._fh.size ) bytecount_list = list(bytecounts) for j in range(len(bytecounts) - 1): bytecount_list[j] = offsets[j + 1] - offsets[j] bytecount_list[-1] = lastoffset - offsets[-1] page.databytecounts = tuple(bytecount_list) def _ndpi_load_pages(self) -> None: """Read and fix pages from NDPI slide file if CaptureMode > 6. If the value of the CaptureMode tag is greater than 6, change the attributes of TiffPage instances that are part of the pyramid to match 16-bit grayscale data. TiffTag values are not corrected. 
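Only pages whose Magnification tag (65421) is missing or positive, that
        is, pages belonging to the pyramid rather than macro or map images,
        are modified.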
""" pages = self.pages capturemode = self.pages.first.tags.valueof(65441) if capturemode is None or capturemode < 6: return pages.cache = True pages.useframes = False pages._load() for page in pages: assert isinstance(page, TiffPage) mag = page.tags.valueof(65421) if mag is None or mag > 0: page.photometric = PHOTOMETRIC.MINISBLACK page.sampleformat = SAMPLEFORMAT.UINT page.samplesperpixel = 1 page.bitspersample = 16 page.dtype = page._dtype = numpy.dtype(numpy.uint16) if page.shaped[-1] > 1: page.axes = page.axes[:-1] page.shape = page.shape[:-1] page.shaped = page.shaped[:-1] + (1,) def __getattr__(self, name: str, /) -> bool: """Return `is_flag` attributes from first page.""" if name[3:] in TIFF.PAGE_FLAGS: if not self.pages: return False value = bool(getattr(self.pages.first, name)) setattr(self, name, value) return value raise AttributeError( f'{self.__class__.__name__!r} object has no attribute {name!r}' ) def __enter__(self) -> TiffFile: return self def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: self.close() def __repr__(self) -> str: return f'' def __str__(self) -> str: return self._str() def _str(self, detail: int = 0, width: int = 79) -> str: """Return string containing information about TiffFile. The `detail` parameter specifies the level of detail returned: 0: file only. 1: all series, first page of series and its tags. 2: large tag values and file metadata. 3: all pages. """ info_list = [ "TiffFile '{}'", format_size(self._fh.size), ( '' if byteorder_isnative(self.byteorder) else {'<': 'little-endian', '>': 'big-endian'}[self.byteorder] ), ] if self.is_bigtiff: info_list.append('BigTiff') if len(self.pages) > 1: info_list.append(f'{len(self.pages)} Pages') if len(self.series) > 1: info_list.append(f'{len(self.series)} Series') if len(self._files) > 1: info_list.append(f'{len(self._files)} Files') flags = self.flags if 'uniform' in flags and len(self.pages) == 1: flags.discard('uniform') info_list.append('|'.join(f.lower() for f in sorted(flags))) info = ' '.join(info_list) info = info.replace(' ', ' ').replace(' ', ' ') info = info.format( snipstr(self._fh.name, max(12, width + 2 - len(info))) ) if detail <= 0: return info info_list = [info] info_list.append('\n'.join(str(s) for s in self.series)) if detail >= 3: for page in self.pages: if page is None: continue info_list.append(page._str(detail=detail, width=width)) if page.pages is not None: for subifd in page.pages: info_list.append( subifd._str(detail=detail, width=width) ) elif self.series: info_list.extend( s.keyframe._str(detail=detail, width=width) for s in self.series if not s.keyframe.parent.filehandle.closed # avoid warning ) elif self.pages: # and self.pages.first: info_list.append(self.pages.first._str(detail=detail, width=width)) if detail >= 2: for name in sorted(self.flags): if hasattr(self, name + '_metadata'): m = getattr(self, name + '_metadata') if m: info_list.append( f'{name.upper()}_METADATA\n' f'{pformat(m, width=width, height=detail * 24)}' ) return '\n\n'.join(info_list).replace('\n\n\n', '\n\n') @cached_property def flags(self) -> set[str]: """Set of file flags (a potentially expensive operation).""" return { name.lower() for name in TIFF.FILE_FLAGS if getattr(self, 'is_' + name) } @cached_property def is_uniform(self) -> bool: """File contains uniform series of pages.""" # the hashes of IFDs 0, 7, and -1 are the same pages = self.pages try: page = self.pages.first except IndexError: return False if page.subifds: return False if page.is_scanimage or page.is_nih: return 
True i = 0 useframes = pages.useframes try: pages.useframes = False h = page.hash for i in (1, 7, -1): if pages[i].aspage().hash != h: return False except IndexError: return i == 1 # single page TIFF is uniform finally: pages.useframes = useframes return True @property def is_appendable(self) -> bool: """Pages can be appended to file without corrupting.""" # TODO: check other formats return not ( self.is_ome or self.is_lsm or self.is_stk or self.is_imagej or self.is_fluoview or self.is_micromanager ) @property def is_bigtiff(self) -> bool: """File has BigTIFF format.""" return self.tiff.is_bigtiff @cached_property def is_ndtiff(self) -> bool: """File has NDTiff format.""" # file should be accompanied by NDTiff.index meta = self.micromanager_metadata if meta is not None and meta.get('MajorVersion', 0) >= 2: self.is_uniform = True return True return False @cached_property def is_mmstack(self) -> bool: """File has Micro-Manager stack format.""" meta = self.micromanager_metadata if ( meta is not None and 'Summary' in meta and 'IndexMap' in meta and meta.get('MajorVersion', 1) == 0 # and 'MagellanStack' not in self.filename: ): self.is_uniform = True return True return False @cached_property def is_mdgel(self) -> bool: """File has MD Gel format.""" # side effect: add second page, if exists, to cache try: ismdgel = ( self.pages.first.is_mdgel or self.pages.get(1, cache=True).is_mdgel ) if ismdgel: self.is_uniform = False return ismdgel except IndexError: return False @property def is_sis(self) -> bool: """File is Olympus SIS format.""" try: return ( self.pages.first.is_sis and not self.filename.lower().endswith('.vsi') ) except IndexError: return False @cached_property def shaped_metadata(self) -> tuple[dict[str, Any], ...] | None: """Tifffile metadata from JSON formatted ImageDescription tags.""" if not self.is_shaped: return None result = [] for s in self.series: if s.kind.lower() != 'shaped': continue page = s.pages[0] if ( not isinstance(page, TiffPage) or page.shaped_description is None ): continue result.append(shaped_description_metadata(page.shaped_description)) return tuple(result) @property def ome_metadata(self) -> str | None: """OME XML metadata from ImageDescription tag.""" if not self.is_ome: return None # return xml2dict(self.pages.first.description)['OME'] if self._omexml: return self._omexml return self.pages.first.description @property def scn_metadata(self) -> str | None: """Leica SCN XML metadata from ImageDescription tag.""" if not self.is_scn: return None return self.pages.first.description @property def philips_metadata(self) -> str | None: """Philips DP XML metadata from ImageDescription tag.""" if not self.is_philips: return None return self.pages.first.description @property def indica_metadata(self) -> str | None: """IndicaLabs XML metadata from ImageDescription tag.""" if not self.is_indica: return None return self.pages.first.description @property def avs_metadata(self) -> str | None: """Argos AVS XML metadata from tag 65000.""" if not self.is_avs: return None return self.pages.first.tags.valueof(65000) @property def lsm_metadata(self) -> dict[str, Any] | None: """LSM metadata from CZ_LSMINFO tag.""" if not self.is_lsm: return None return self.pages.first.tags.valueof(34412) # CZ_LSMINFO @cached_property def stk_metadata(self) -> dict[str, Any] | None: """STK metadata from UIC tags.""" if not self.is_stk: return None page = self.pages.first tags = page.tags result: dict[str, Any] = {} if page.description: result['PlaneDescriptions'] = page.description.split('\x00') 
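        # Descriptive note: STK metadata is spread over the UIC tags:
        # 33628 (UIC1) holds assorted plane properties, 33629 (UIC2) the
        # plane count, Z distances, and timestamps, 33630 (UIC3) wavelengths,
        # and 33631 (UIC4) values that override UIC1.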
tag = tags.get(33629) # UIC2tag result['NumberPlanes'] = 1 if tag is None else tag.count value = tags.valueof(33628) # UIC1tag if value is not None: result.update(value) value = tags.valueof(33630) # UIC3tag if value is not None: result.update(value) # wavelengths value = tags.valueof(33631) # UIC4tag if value is not None: result.update(value) # override UIC1 tags uic2tag = tags.valueof(33629) if uic2tag is not None: result['ZDistance'] = uic2tag['ZDistance'] result['TimeCreated'] = uic2tag['TimeCreated'] result['TimeModified'] = uic2tag['TimeModified'] for key in ('Created', 'Modified'): try: result['Datetime' + key] = numpy.array( [ julian_datetime(*dt) for dt in zip( uic2tag['Date' + key], uic2tag['Time' + key] ) ], dtype='datetime64[ns]', ) except Exception as exc: result['Datetime' + key] = None logger().warning( f'{self!r} STK Datetime{key} raised {exc!r:.128}' ) return result @cached_property def imagej_metadata(self) -> dict[str, Any] | None: """ImageJ metadata from ImageDescription and IJMetadata tags.""" if not self.is_imagej: return None page = self.pages.first if page.imagej_description is None: return None result = imagej_description_metadata(page.imagej_description) value = page.tags.valueof(50839) # IJMetadata if value is not None: try: result.update(value) except Exception: pass return result @cached_property def fluoview_metadata(self) -> dict[str, Any] | None: """FluoView metadata from MM_Header and MM_Stamp tags.""" if not self.is_fluoview: return None result = {} page = self.pages.first value = page.tags.valueof(34361) # MM_Header if value is not None: result.update(value) # TODO: read stamps from all pages value = page.tags.valueof(34362) # MM_Stamp if value is not None: result['Stamp'] = value # skip parsing image description; not reliable # try: # t = fluoview_description_metadata(page.image_description) # if t is not None: # result['ImageDescription'] = t # except Exception as exc: # logger().warning( # f'{self!r} ' # f'raised {exc!r:.128}' # ) return result @property def nih_metadata(self) -> dict[str, Any] | None: """NIHImage metadata from NIHImageHeader tag.""" if not self.is_nih: return None return self.pages.first.tags.valueof(43314) # NIHImageHeader @property def fei_metadata(self) -> dict[str, Any] | None: """FEI metadata from SFEG or HELIOS tags.""" if not self.is_fei: return None tags = self.pages.first.tags result = {} try: result.update(tags.valueof(34680)) # FEI_SFEG except Exception: pass try: result.update(tags.valueof(34682)) # FEI_HELIOS except Exception: pass return result @property def sem_metadata(self) -> dict[str, Any] | None: """SEM metadata from CZ_SEM tag.""" if not self.is_sem: return None return self.pages.first.tags.valueof(34118) @property def sis_metadata(self) -> dict[str, Any] | None: """Olympus SIS metadata from OlympusSIS and OlympusINI tags.""" if not self.pages.first.is_sis: return None tags = self.pages.first.tags result = {} try: result.update(tags.valueof(33471)) # OlympusINI except Exception: pass try: result.update(tags.valueof(33560)) # OlympusSIS except Exception: pass return result if result else None @cached_property def mdgel_metadata(self) -> dict[str, Any] | None: """MD-GEL metadata from MDFileTag tags.""" if not self.is_mdgel: return None if 33445 in self.pages.first.tags: tags = self.pages.first.tags else: page = cast(TiffPage, self.pages[1]) if 33445 in page.tags: tags = page.tags else: return None result = {} for code in range(33445, 33453): if code not in tags: continue name = TIFF.TAGS[code] result[name[2:]] = 
tags.valueof(code) return result @property def andor_metadata(self) -> dict[str, Any] | None: """Andor metadata from Andor tags.""" return self.pages.first.andor_tags @property def epics_metadata(self) -> dict[str, Any] | None: """EPICS metadata from areaDetector tags.""" return self.pages.first.epics_tags @property def tvips_metadata(self) -> dict[str, Any] | None: """TVIPS metadata from tag.""" if not self.is_tvips: return None return self.pages.first.tags.valueof(37706) @cached_property def metaseries_metadata(self) -> dict[str, Any] | None: """MetaSeries metadata from ImageDescription tag of first tag.""" # TODO: remove this? It is a per page property if not self.is_metaseries: return None return metaseries_description_metadata(self.pages.first.description) @cached_property def pilatus_metadata(self) -> dict[str, Any] | None: """Pilatus metadata from ImageDescription tag.""" if not self.is_pilatus: return None return pilatus_description_metadata(self.pages.first.description) @cached_property def micromanager_metadata(self) -> dict[str, Any] | None: """Non-TIFF Micro-Manager metadata.""" if not self.is_micromanager: return None return read_micromanager_metadata(self._fh) @cached_property def gdal_structural_metadata(self) -> dict[str, Any] | None: """Non-TIFF GDAL structural metadata.""" return read_gdal_structural_metadata(self._fh) @cached_property def scanimage_metadata(self) -> dict[str, Any] | None: """ScanImage non-varying frame and ROI metadata. The returned dict may contain 'FrameData', 'RoiGroups', and 'version' keys. Varying frame data can be found in the ImageDescription tags. """ if not self.is_scanimage: return None result: dict[str, Any] = {} try: framedata, roidata, version = read_scanimage_metadata(self._fh) result['version'] = version result['FrameData'] = framedata result.update(roidata) except ValueError: pass return result @property def geotiff_metadata(self) -> dict[str, Any] | None: """GeoTIFF metadata from tags.""" if not self.is_geotiff: return None return self.pages.first.geotiff_tags @property def gdal_metadata(self) -> dict[str, Any] | None: """GDAL XML metadata from GDAL_METADATA tag.""" if not self.is_gdal: return None return self.pages.first.tags.valueof(42112) @cached_property def astrotiff_metadata(self) -> dict[str, Any] | None: """AstroTIFF metadata from ImageDescription tag.""" if not self.is_astrotiff: return None return astrotiff_description_metadata(self.pages.first.description) @cached_property def streak_metadata(self) -> dict[str, Any] | None: """Hamamatsu streak metadata from ImageDescription tag.""" if not self.is_streak: return None return streak_description_metadata( self.pages.first.description, self.filehandle ) @property def eer_metadata(self) -> str | None: """EER AcquisitionMetadata XML from tag 65001.""" if not self.is_eer: return None value = self.pages.first.tags.valueof(65001) return None if value is None else value.decode() @final class TiffFormat: """TIFF format properties.""" __slots__ = ( 'version', 'byteorder', 'offsetsize', 'offsetformat', 'tagnosize', 'tagnoformat', 'tagsize', 'tagformat1', 'tagformat2', 'tagoffsetthreshold', '_hash', ) version: int """Version of TIFF header.""" byteorder: Literal['>', '<'] """Byteorder of TIFF header.""" offsetsize: int """Size of offsets.""" offsetformat: str """Struct format for offset values.""" tagnosize: int """Size of `tagnoformat`.""" tagnoformat: str """Struct format for number of TIFF tags.""" tagsize: int """Size of `tagformat1` and `tagformat2`.""" tagformat1: str """Struct format 
for code and dtype of TIFF tag."""

    tagformat2: str
    """Struct format for count and value of TIFF tag."""

    tagoffsetthreshold: int
    """Size of inline tag values."""

    _hash: int

    def __init__(
        self,
        version: int,
        byteorder: Literal['>', '<'],
        offsetsize: int,
        offsetformat: str,
        tagnosize: int,
        tagnoformat: str,
        tagsize: int,
        tagformat1: str,
        tagformat2: str,
        tagoffsetthreshold: int,
    ) -> None:
        self.version = version
        self.byteorder = byteorder
        self.offsetsize = offsetsize
        self.offsetformat = offsetformat
        self.tagnosize = tagnosize
        self.tagnoformat = tagnoformat
        self.tagsize = tagsize
        self.tagformat1 = tagformat1
        self.tagformat2 = tagformat2
        self.tagoffsetthreshold = tagoffsetthreshold
        self._hash = hash((version, byteorder, offsetsize))

    @property
    def is_bigtiff(self) -> bool:
        """Format is 64-bit BigTIFF."""
        return self.version == 43

    @property
    def is_ndpi(self) -> bool:
        """Format is 32-bit TIFF with 64-bit offsets used by NDPI."""
        return self.version == 42 and self.offsetsize == 8

    def __hash__(self) -> int:
        return self._hash

    def __repr__(self) -> str:
        bits = '32' if self.version == 42 else '64'
        endian = 'little' if self.byteorder == '<' else 'big'
        ndpi = ' with 64-bit offsets' if self.is_ndpi else ''
        return f'<tifffile.TiffFormat {bits}-bit {endian}-endian{ndpi}>'

    def __str__(self) -> str:
        return indent(
            repr(self),
            *(
                f'{attr}: {getattr(self, attr)!r}'
                for attr in TiffFormat.__slots__
            ),
        )


@final
class TiffPage:
    """TIFF image file directory (IFD).

    TiffPage instances are not thread-safe. All attributes are read-only.

    Parameters:
        parent:
            TiffFile instance to read page from.
            The file handle position must be at an offset to an IFD
            structure.
        index:
            Index of page in IFD tree.
        keyframe:
            Not used.

    Raises:
        TiffFileError: Invalid TIFF structure.

    """

    # instance attributes

    tags: TiffTags
    """Tags belonging to page."""

    parent: TiffFile
    """TiffFile instance page belongs to."""

    offset: int
    """Position of page in file."""

    shape: tuple[int, ...]
    """Shape of image array in page."""

    dtype: numpy.dtype[Any] | None
    """Data type of image array in page."""

    shaped: tuple[int, int, int, int, int]
    """Normalized 5-dimensional shape of image array in page:

        0. separate samplesperpixel or 1.
        1. imagedepth or 1.
        2. imagelength.
        3. imagewidth.
        4. contig samplesperpixel or 1.

    """

    axes: str
    """Character codes for dimensions in image array:
    'S' sample, 'X' width, 'Y' length, 'Z' depth.
    """

    dataoffsets: tuple[int, ...]
    """Positions of strips or tiles in file."""

    databytecounts: tuple[int, ...]
    """Size of strips or tiles in file."""

    _dtype: numpy.dtype[Any] | None
    _index: tuple[int, ...]
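# Example (illustrative sketch, not part of the library source): the
# TiffFormat instance described above is exposed as TiffFile.tiff and can
# be used to distinguish classic TIFF, BigTIFF, and NDPI variants.
# 'temp.tif' is a hypothetical file name.
#
#   >>> import tifffile
#   >>> with tifffile.TiffFile('temp.tif') as tif:
#   ...     fmt = tif.tiff
#   ...     print(fmt.version, fmt.byteorder, fmt.offsetsize)
#   ...     print(fmt.is_bigtiff, fmt.is_ndpi)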
# index of page in IFD tree # default properties; might be updated from tags subfiletype: int = 0 """:py:class:`FILETYPE` kind of image.""" imagewidth: int = 0 """Number of columns (pixels per row) in image.""" imagelength: int = 0 """Number of rows in image.""" imagedepth: int = 1 """Number of Z slices in image.""" tilewidth: int = 0 """Number of columns in each tile.""" tilelength: int = 0 """Number of rows in each tile.""" tiledepth: int = 1 """Number of Z slices in each tile.""" samplesperpixel: int = 1 """Number of components per pixel.""" bitspersample: int = 1 """Number of bits per pixel component.""" sampleformat: int = 1 """:py:class:`SAMPLEFORMAT` type of pixel components.""" rowsperstrip: int = 2**32 - 1 """Number of rows per strip.""" compression: int = 1 """:py:class:`COMPRESSION` scheme used on image data.""" planarconfig: int = 1 """:py:class:`PLANARCONFIG` type of storage of components in pixel.""" fillorder: int = 1 """Logical order of bits within byte of image data.""" photometric: int = 0 """:py:class:`PHOTOMETRIC` color space of image.""" predictor: int = 1 """:py:class:`PREDICTOR` applied to image data before compression.""" extrasamples: tuple[int, ...] = () """:py:class:`EXTRASAMPLE` interpretation of extra components in pixel.""" subsampling: tuple[int, int] | None = None """Subsampling factors used for chrominance components.""" subifds: tuple[int, ...] | None = None """Positions of SubIFDs in file.""" jpegtables: bytes | None = None """JPEG quantization and Huffman tables.""" jpegheader: bytes | None = None """JPEG header for NDPI.""" software: str = '' """Software used to create image.""" description: str = '' """Subject of image.""" description1: str = '' """Value of second ImageDescription tag.""" nodata: int | float = 0 """Value used for missing data. 
The value of the GDAL_NODATA tag or 0.""" def __init__( self, parent: TiffFile, /, index: int | Sequence[int], *, keyframe: TiffPage | None = None, ) -> None: tag: TiffTag | None tiff = parent.tiff self.parent = parent self.shape = () self.shaped = (0, 0, 0, 0, 0) self.dtype = self._dtype = None self.axes = '' self.tags = tags = TiffTags() self.dataoffsets = () self.databytecounts = () if isinstance(index, int): self._index = (index,) else: self._index = tuple(index) # read IFD structure and its tags from file fh = parent.filehandle self.offset = fh.tell() # offset to this IFD try: tagno: int = struct.unpack( tiff.tagnoformat, fh.read(tiff.tagnosize) )[0] if tagno > 4096: raise ValueError(f'suspicious number of tags {tagno}') except Exception as exc: raise TiffFileError(f'corrupted tag list @{self.offset}') from exc tagoffset = self.offset + tiff.tagnosize # fh.tell() tagsize = tagsize_ = tiff.tagsize data = fh.read(tagsize * tagno) if len(data) != tagsize * tagno: raise TiffFileError('corrupted IFD structure') if tiff.is_ndpi: # patch offsets/values for 64-bit NDPI file tagsize = 16 fh.seek(8, os.SEEK_CUR) ext = fh.read(4 * tagno) # high bits data = b''.join( data[i * 12 : i * 12 + 12] + ext[i * 4 : i * 4 + 4] for i in range(tagno) ) tagindex = -tagsize for i in range(tagno): tagindex += tagsize tagdata = data[tagindex : tagindex + tagsize] try: tag = TiffTag.fromfile( parent, offset=tagoffset + i * tagsize_, header=tagdata ) except TiffFileError as exc: logger().error(f' raised {exc!r:.128}') continue tags.add(tag) if not tags: return # found in FIBICS for code, name in TIFF.TAG_ATTRIBUTES.items(): value = tags.valueof(code) if value is None: continue if code in {270, 305} and not isinstance(value, str): # wrong string type for software or description continue setattr(self, name, value) value = tags.valueof(270, index=1) if isinstance(value, str): self.description1 = value if self.subfiletype == 0: value = tags.valueof(255) # SubfileType if value == 2: self.subfiletype = 0b1 # reduced image elif value == 3: self.subfiletype = 0b10 # multi-page elif not isinstance(self.subfiletype, int): # files created by IDEAS logger().warning(f'{self!r} invalid {self.subfiletype=}') self.subfiletype = 0 # consolidate private tags; remove them from self.tags # if self.is_andor: # self.andor_tags # elif self.is_epics: # self.epics_tags # elif self.is_ndpi: # self.ndpi_tags # if self.is_sis and 34853 in tags: # # TODO: cannot change tag.name # tags[34853].name = 'OlympusSIS2' # dataoffsets and databytecounts # TileOffsets self.dataoffsets = tags.valueof(324) if self.dataoffsets is None: # StripOffsets self.dataoffsets = tags.valueof(273) if self.dataoffsets is None: # JPEGInterchangeFormat et al. self.dataoffsets = tags.valueof(513) if self.dataoffsets is None: self.dataoffsets = () logger().error(f'{self!r} missing data offset tag') # TileByteCounts self.databytecounts = tags.valueof(325) if self.databytecounts is None: # StripByteCounts self.databytecounts = tags.valueof(279) if self.databytecounts is None: # JPEGInterchangeFormatLength et al. 
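# Example (illustrative sketch, not part of the library source): the data
# offset and bytecount tags resolved below are exposed on TiffPage as the
# public dataoffsets and databytecounts attributes. 'temp.tif' is a
# hypothetical file name.
#
#   >>> import tifffile
#   >>> with tifffile.TiffFile('temp.tif') as tif:
#   ...     page = tif.pages.first
#   ...     print(list(zip(page.dataoffsets, page.databytecounts))[:4])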
self.databytecounts = tags.valueof(514)

        if (
            self.imagewidth == 0
            and self.imagelength == 0
            and self.dataoffsets
            and self.databytecounts
        ):
            # dimensions may be missing in some RAW formats
            # read dimensions from assumed JPEG encoded segment
            try:
                fh.seek(self.dataoffsets[0])
                (
                    precision,
                    imagelength,
                    imagewidth,
                    samplesperpixel,
                ) = jpeg_shape(fh.read(min(self.databytecounts[0], 4096)))
            except Exception:
                pass
            else:
                self.imagelength = imagelength
                self.imagewidth = imagewidth
                self.samplesperpixel = samplesperpixel
                if 258 not in tags:
                    self.bitspersample = 8 if precision <= 8 else 16
                if 262 not in tags and samplesperpixel == 3:
                    self.photometric = PHOTOMETRIC.YCBCR
                if 259 not in tags:
                    self.compression = COMPRESSION.OJPEG
                if 278 not in tags:
                    self.rowsperstrip = imagelength
        elif self.compression == 6:
            # OJPEG hack. See libtiff v4.2.0 tif_dirread.c#L4082
            if 262 not in tags:
                # PhotometricInterpretation missing
                self.photometric = PHOTOMETRIC.YCBCR
            elif self.photometric == 2:
                # RGB -> YCbCr
                self.photometric = PHOTOMETRIC.YCBCR
            if 258 not in tags:
                # BitsPerSample missing
                self.bitspersample = 8
            if 277 not in tags:
                # SamplesPerPixel missing
                if self.photometric in {2, 6}:
                    self.samplesperpixel = 3
                elif self.photometric in {0, 1}:
                    # grayscale images have one sample per pixel
                    self.samplesperpixel = 1
        elif self.is_lsm or (self.index != 0 and self.parent.is_lsm):
            # correct non standard LSM bitspersample tags
            tags[258]._fix_lsm_bitspersample()
            if self.compression == 1 and self.predictor != 1:
                # work around bug in LSM510 software
                self.predictor = PREDICTOR.NONE
        elif self.is_vista or (self.index != 0 and self.parent.is_vista):
            # ISS Vista writes wrong ImageDepth tag
            self.imagedepth = 1
        elif self.is_stk:
            # read UIC1tag again now that plane count is known
            tag = tags.get(33628)  # UIC1tag
            assert tag is not None
            fh.seek(tag.valueoffset)
            uic2tag = tags.get(33629)  # UIC2tag
            try:
                tag.value = read_uic1tag(
                    fh,
                    tiff.byteorder,
                    tag.dtype,
                    tag.count,
                    0,
                    planecount=uic2tag.count if uic2tag is not None else 1,
                )
            except Exception as exc:
                logger().warning(f'{self!r} raised {exc!r:.128}')

        tag = tags.get(50839)
        if tag is not None:
            # decode IJMetadata tag
            try:
                tag.value = imagej_metadata(
                    tag.value,
                    tags[50838].value,  # IJMetadataByteCounts
                    tiff.byteorder,
                )
            except Exception as exc:
                logger().warning(f'{self!r} raised {exc!r:.128}')

        # BitsPerSample
        value = tags.valueof(258)
        if value is not None:
            if self.bitspersample != 1:
                pass  # bitspersample was set by ojpeg hack
            elif tags[258].count == 1:
                self.bitspersample = int(value)
            else:
                # LSM might list more items than samplesperpixel
                value = value[: self.samplesperpixel]
                if any(v - value[0] for v in value):
                    self.bitspersample = value
                else:
                    self.bitspersample = int(value[0])

        # SampleFormat
        value = tags.valueof(339)
        if value is not None:
            if tags[339].count == 1:
                try:
                    self.sampleformat = SAMPLEFORMAT(value)
                except ValueError:
                    self.sampleformat = int(value)
            else:
                value = value[: self.samplesperpixel]
                if any(v - value[0] for v in value):
                    try:
                        self.sampleformat = SAMPLEFORMAT(value)
                    except ValueError:
                        self.sampleformat = int(value)
                else:
                    try:
                        self.sampleformat = SAMPLEFORMAT(value[0])
                    except ValueError:
                        self.sampleformat = int(value[0])
        elif self.bitspersample == 32 and (
            self.is_indica or (self.index != 0 and self.parent.is_indica)
        ):
            # IndicaLabsImageWriter does not write SampleFormat tag
            self.sampleformat = SAMPLEFORMAT.IEEEFP

        if 322 in tags:  # TileWidth
            self.rowsperstrip = 0
        elif 257 in tags:  # ImageLength
            if 278 not in tags or tags[278].count > 1:  # RowsPerStrip
                self.rowsperstrip = self.imagelength
            self.rowsperstrip =
min(self.rowsperstrip, self.imagelength) # self.stripsperimage = int(math.floor( # float(self.imagelength + self.rowsperstrip - 1) / # self.rowsperstrip)) # determine dtype dtypestr = TIFF.SAMPLE_DTYPES.get( (self.sampleformat, self.bitspersample), None ) if dtypestr is not None: dtype = numpy.dtype(dtypestr) else: dtype = None self.dtype = self._dtype = dtype # determine shape of data imagelength = self.imagelength imagewidth = self.imagewidth imagedepth = self.imagedepth samplesperpixel = self.samplesperpixel if self.photometric == 2 or samplesperpixel > 1: # PHOTOMETRIC.RGB if self.planarconfig == 1: self.shaped = ( 1, imagedepth, imagelength, imagewidth, samplesperpixel, ) if imagedepth == 1: self.shape = (imagelength, imagewidth, samplesperpixel) self.axes = 'YXS' else: self.shape = ( imagedepth, imagelength, imagewidth, samplesperpixel, ) self.axes = 'ZYXS' else: self.shaped = ( samplesperpixel, imagedepth, imagelength, imagewidth, 1, ) if imagedepth == 1: self.shape = (samplesperpixel, imagelength, imagewidth) self.axes = 'SYX' else: self.shape = ( samplesperpixel, imagedepth, imagelength, imagewidth, ) self.axes = 'SZYX' else: self.shaped = (1, imagedepth, imagelength, imagewidth, 1) if imagedepth == 1: self.shape = (imagelength, imagewidth) self.axes = 'YX' else: self.shape = (imagedepth, imagelength, imagewidth) self.axes = 'ZYX' if not self.databytecounts: self.databytecounts = ( product(self.shape) * (self.bitspersample // 8), ) if self.compression != 1: logger().error(f'{self!r} missing ByteCounts tag') if imagelength and self.rowsperstrip and not self.is_lsm: # fix incorrect number of strip bytecounts and offsets maxstrips = ( int( math.floor(imagelength + self.rowsperstrip - 1) / self.rowsperstrip ) * self.imagedepth ) if self.planarconfig == 2: maxstrips *= self.samplesperpixel if maxstrips != len(self.databytecounts): logger().error( f'{self!r} incorrect StripByteCounts count ' f'({len(self.databytecounts)} != {maxstrips})' ) self.databytecounts = self.databytecounts[:maxstrips] if maxstrips != len(self.dataoffsets): logger().error( f'{self!r} incorrect StripOffsets count ' f'({len(self.dataoffsets)} != {maxstrips})' ) self.dataoffsets = self.dataoffsets[:maxstrips] value = tags.valueof(42113) # GDAL_NODATA if value is not None and dtype is not None: try: pytype = type(dtype.type(0).item()) value = value.replace(',', '.') # comma decimal separator self.nodata = pytype(value) if not numpy.can_cast( numpy.min_scalar_type(self.nodata), self.dtype ): raise ValueError( f'{self.nodata} is not castable to {self.dtype}' ) except Exception as exc: logger().warning( f'{self!r} parsing GDAL_NODATA tag raised {exc!r:.128}' ) self.nodata = 0 mcustarts = tags.valueof(65426) if mcustarts is not None and self.is_ndpi: # use NDPI JPEG McuStarts as tile offsets mcustarts = mcustarts.astype(numpy.int64) high = tags.valueof(65432) if high is not None: # McuStartsHighBytes high = high.astype(numpy.uint64) high <<= 32 mcustarts += high.astype(numpy.int64) fh.seek(self.dataoffsets[0]) jpegheader = fh.read(mcustarts[0]) try: ( self.tilelength, self.tilewidth, self.jpegheader, ) = ndpi_jpeg_tile(jpegheader) except ValueError as exc: logger().warning( f'{self!r} raised {exc!r:.128}' ) else: # TODO: optimize tuple(ndarray.tolist()) databytecounts = numpy.diff( mcustarts, append=self.databytecounts[0] ) self.databytecounts = tuple( databytecounts.tolist() # type: ignore[arg-type] ) mcustarts += self.dataoffsets[0] self.dataoffsets = tuple(mcustarts.tolist()) @cached_property def decode( self, ) -> 
Callable[ ..., tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ], ]: """Return decoded segment, its shape, and indices in image. The decode function is implemented as a closure and has the following signature: Parameters: data (Union[bytes, None]): Encoded bytes of segment (strip or tile) or None for empty segments. index (int): Index of segment in Offsets and Bytecount tag values. jpegtables (Optional[bytes]): For JPEG compressed segments only, value of JPEGTables tag if any. Returns: - Decoded segment or None for empty segments. - Position of segment in image array of normalized shape (separate sample, depth, length, width, contig sample). - Shape of segment (depth, length, width, contig samples). The shape of strips depends on their linear index. Raises: ValueError or NotImplementedError: Decoding is not supported. TiffFileError: Invalid TIFF structure. """ if self.hash in self.parent._parent._decoders: return self.parent._parent._decoders[self.hash] def cache(decode, /): self.parent._parent._decoders[self.hash] = decode return decode if self.dtype is None or self._dtype is None: def decode_raise_dtype(*args, **kwargs): raise ValueError( 'data type not supported ' f'(SampleFormat {self.sampleformat}, ' f'{self.bitspersample}-bit)' ) return cache(decode_raise_dtype) if 0 in self.shaped: def decode_raise_empty(*args, **kwargs): raise ValueError('empty image') return cache(decode_raise_empty) try: if self.compression == 1: decompress = None else: decompress = TIFF.DECOMPRESSORS[self.compression] if ( self.compression in {65000, 65001, 65002} and not self.parent.is_eer ): raise KeyError(self.compression) except KeyError as exc: def decode_raise_compression(*args, exc=str(exc)[1:-1], **kwargs): raise ValueError(f'{exc}') return cache(decode_raise_compression) try: if self.predictor == 1: unpredict = None else: unpredict = TIFF.UNPREDICTORS[self.predictor] except KeyError as exc: if self.compression in TIFF.IMAGE_COMPRESSIONS: logger().warning( f'{self!r} ignoring predictor {self.predictor}' ) unpredict = None else: def decode_raise_predictor( *args, exc=str(exc)[1:-1], **kwargs ): raise ValueError(f'{exc}') return cache(decode_raise_predictor) if self.tags.get(339) is not None: tag = self.tags[339] # SampleFormat if tag.count != 1 and any(i - tag.value[0] for i in tag.value): def decode_raise_sampleformat(*args, **kwargs): raise ValueError( f'sample formats do not match {tag.value}' ) return cache(decode_raise_sampleformat) if self.is_subsampled and ( self.compression not in {6, 7, 34892, 33007} or self.planarconfig == 2 ): def decode_raise_subsampling(*args, **kwargs): raise NotImplementedError( 'chroma subsampling not supported without JPEG compression' ) return cache(decode_raise_subsampling) if self.compression == 50001 and self.samplesperpixel == 4: # WebP segments may be missing all-opaque alpha channel def decompress_webp_rgba(data, out=None): return imagecodecs.webp_decode(data, hasalpha=True, out=out) decompress = decompress_webp_rgba # normalize segments shape to [depth, length, width, contig] if self.is_tiled: stshape = ( self.tiledepth, self.tilelength, self.tilewidth, self.samplesperpixel if self.planarconfig == 1 else 1, ) else: stshape = ( 1, self.rowsperstrip, self.imagewidth, self.samplesperpixel if self.planarconfig == 1 else 1, ) stdepth, stlength, stwidth, samples = stshape _, imdepth, imlength, imwidth, samples = self.shaped if self.is_tiled: width = (imwidth + stwidth - 1) // stwidth length = (imlength + stlength - 1) // stlength 
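# Example (illustrative sketch, not part of the library source): manually
# decode the first strip or tile of a page with the closure returned by
# TiffPage.decode, following the signature documented above. 'temp.tif'
# is a hypothetical file name.
#
#   >>> import tifffile
#   >>> with tifffile.TiffFile('temp.tif') as tif:
#   ...     page = tif.pages.first
#   ...     fh = tif.filehandle
#   ...     _ = fh.seek(page.dataoffsets[0])
#   ...     data = fh.read(page.databytecounts[0])
#   ...     segment, indices, shape = page.decode(
#   ...         data, 0, jpegtables=page.jpegtables
#   ...     )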
depth = (imdepth + stdepth - 1) // stdepth def indices( segmentindex: int, / ) -> tuple[ tuple[int, int, int, int, int], tuple[int, int, int, int] ]: # return indices and shape of tile in image array return ( ( segmentindex // (width * length * depth), (segmentindex // (width * length)) % depth * stdepth, (segmentindex // width) % length * stlength, segmentindex % width * stwidth, 0, ), stshape, ) def reshape( data: NDArray[Any], indices: tuple[int, int, int, int, int], shape: tuple[int, int, int, int], /, ) -> NDArray[Any]: # return reshaped tile or raise TiffFileError size = shape[0] * shape[1] * shape[2] * shape[3] if data.ndim == 1 and data.size > size: # decompression / unpacking might return too many bytes data = data[:size] if data.size == size: # complete tile # data might be non-contiguous; cannot reshape inplace return data.reshape(shape) try: # data fills remaining space # found in JPEG/PNG compressed tiles return data.reshape( ( min(imdepth - indices[1], shape[0]), min(imlength - indices[2], shape[1]), min(imwidth - indices[3], shape[2]), samples, ) ) except ValueError: pass try: # data fills remaining horizontal space # found in tiled GeoTIFF return data.reshape( ( min(imdepth - indices[1], shape[0]), min(imlength - indices[2], shape[1]), shape[2], samples, ) ) except ValueError: pass raise TiffFileError( f'corrupted tile @ {indices} cannot be reshaped from ' f'{data.shape} to {shape}' ) def pad( data: NDArray[Any], shape: tuple[int, int, int, int], / ) -> tuple[NDArray[Any], tuple[int, int, int, int]]: # pad tile to shape if data.shape == shape: return data, shape padwidth = [(0, i - j) for i, j in zip(shape, data.shape)] data = numpy.pad(data, padwidth, constant_values=self.nodata) return data, shape def pad_none( shape: tuple[int, int, int, int], / ) -> tuple[int, int, int, int]: # return shape of tile return shape else: # strips length = (imlength + stlength - 1) // stlength def indices( segmentindex: int, / ) -> tuple[ tuple[int, int, int, int, int], tuple[int, int, int, int] ]: # return indices and shape of strip in image array indices = ( segmentindex // (length * imdepth), (segmentindex // length) % imdepth * stdepth, segmentindex % length * stlength, 0, 0, ) shape = ( stdepth, min(stlength, imlength - indices[2]), stwidth, samples, ) return indices, shape def reshape( data: NDArray[Any], indices: tuple[int, int, int, int, int], shape: tuple[int, int, int, int], /, ) -> NDArray[Any]: # return reshaped strip or raise TiffFileError size = shape[0] * shape[1] * shape[2] * shape[3] if data.ndim == 1 and data.size > size: # decompression / unpacking might return too many bytes data = data[:size] if data.size == size: # expected size try: data.shape = shape except AttributeError: # incompatible shape for in-place modification # decoder returned non-contiguous array data = data.reshape(shape) return data datashape = data.shape try: # too many rows? 
data.shape = shape[0], -1, shape[2], shape[3] data = data[:, : shape[1]] data.shape = shape return data except ValueError: pass raise TiffFileError( 'corrupted strip cannot be reshaped from ' f'{datashape} to {shape}' ) def pad( data: NDArray[Any], shape: tuple[int, int, int, int], / ) -> tuple[NDArray[Any], tuple[int, int, int, int]]: # pad strip length to rowsperstrip shape = (shape[0], stlength, shape[2], shape[3]) if data.shape == shape: return data, shape padwidth = [ (0, 0), (0, stlength - data.shape[1]), (0, 0), (0, 0), ] data = numpy.pad(data, padwidth, constant_values=self.nodata) return data, shape def pad_none( shape: tuple[int, int, int, int], / ) -> tuple[int, int, int, int]: # return shape of strip return (shape[0], stlength, shape[2], shape[3]) if self.compression in {6, 7, 34892, 33007}: # JPEG needs special handling if self.fillorder == 2: logger().debug(f'{self!r} disabling LSB2MSB for JPEG') if unpredict: logger().debug(f'{self!r} disabling predictor for JPEG') if 28672 in self.tags: # SonyRawFileType logger().warning( f'{self!r} SonyRawFileType might need additional ' 'unpacking (see issue #95)' ) colorspace, outcolorspace = jpeg_decode_colorspace( self.photometric, self.planarconfig, self.extrasamples, self.is_jfif, ) def decode_jpeg( data: bytes | None, index: int, /, *, jpegtables: bytes | None = None, jpegheader: bytes | None = None, _fullsize: bool = False, ) -> tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ]: # return decoded segment, its shape, and indices in image segmentindex, shape = indices(index) if data is None: if _fullsize: shape = pad_none(shape) return data, segmentindex, shape data_array: NDArray[Any] = imagecodecs.jpeg_decode( data, bitspersample=self.bitspersample, tables=jpegtables, header=jpegheader, colorspace=colorspace, outcolorspace=outcolorspace, shape=shape[1:3], ) data_array = reshape(data_array, segmentindex, shape) if _fullsize: data_array, shape = pad(data_array, shape) return data_array, segmentindex, shape return cache(decode_jpeg) if self.compression in {65000, 65001, 65002}: # EER decoder requires shape and extra args if self.compression == 65002: rlebits = int(self.tags.valueof(65007, 7)) horzbits = int(self.tags.valueof(65008, 2)) vertbits = int(self.tags.valueof(65009, 2)) elif self.compression == 65001: rlebits = 7 horzbits = 2 vertbits = 2 else: rlebits = 8 horzbits = 2 vertbits = 2 def decode_eer( data: bytes | None, index: int, /, *, jpegtables: bytes | None = None, jpegheader: bytes | None = None, _fullsize: bool = False, ) -> tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ]: # return decoded eer segment, its shape, and indices in image segmentindex, shape = indices(index) if data is None: if _fullsize: shape = pad_none(shape) return data, segmentindex, shape data_array = decompress( data, shape=shape[1:3], rlebits=rlebits, horzbits=horzbits, vertbits=vertbits, superres=False, ) # type: ignore[call-arg, misc] return data_array.reshape(shape), segmentindex, shape return cache(decode_eer) if self.compression == 48124: # Jetraw requires pre-allocated output buffer def decode_jetraw( data: bytes | None, index: int, /, *, jpegtables: bytes | None = None, jpegheader: bytes | None = None, _fullsize: bool = False, ) -> tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ]: # return decoded segment, its shape, and indices in image segmentindex, shape = indices(index) if data is None: if _fullsize: shape = pad_none(shape) 
return data, segmentindex, shape data_array = numpy.zeros(shape, numpy.uint16) decompress(data, out=data_array) # type: ignore[misc] return data_array.reshape(shape), segmentindex, shape return cache(decode_jetraw) if self.compression in TIFF.IMAGE_COMPRESSIONS: # presume codecs always return correct dtype, native byte order... if self.fillorder == 2: logger().debug( f'{self!r} ' f'disabling LSB2MSB for compression {self.compression}' ) if unpredict: logger().debug( f'{self!r} ' f'disabling predictor for compression {self.compression}' ) def decode_image( data: bytes | None, index: int, /, *, jpegtables: bytes | None = None, jpegheader: bytes | None = None, _fullsize: bool = False, ) -> tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ]: # return decoded segment, its shape, and indices in image segmentindex, shape = indices(index) if data is None: if _fullsize: shape = pad_none(shape) return data, segmentindex, shape data_array: NDArray[Any] data_array = decompress(data) # type: ignore[misc] # del data data_array = reshape(data_array, segmentindex, shape) if _fullsize: data_array, shape = pad(data_array, shape) return data_array, segmentindex, shape return cache(decode_image) dtype = numpy.dtype(self.parent.byteorder + self._dtype.char) if self.sampleformat == 5: # complex integer if unpredict is not None: raise NotImplementedError( 'unpredicting complex integers not supported' ) itype = numpy.dtype( f'{self.parent.byteorder}i{self.bitspersample // 16}' ) ftype = numpy.dtype( f'{self.parent.byteorder}f{dtype.itemsize // 2}' ) def unpack(data: bytes, /) -> NDArray[Any]: # return complex integer as numpy.complex return numpy.frombuffer(data, itype).astype(ftype).view(dtype) elif self.bitspersample in {8, 16, 32, 64, 128}: # regular data types if (self.bitspersample * stwidth * samples) % 8: raise ValueError('data and sample size mismatch') if self.predictor in {3, 34894, 34895}: # PREDICTOR.FLOATINGPOINT # floating-point horizontal differencing decoder needs # raw byte order dtype = numpy.dtype(self._dtype.char) def unpack(data: bytes, /) -> NDArray[Any]: # return numpy array from buffer try: # read only numpy array return numpy.frombuffer(data, dtype) except ValueError: # for example, LZW strips may be missing EOI bps = self.bitspersample // 8 size = (len(data) // bps) * bps return numpy.frombuffer(data[:size], dtype) elif isinstance(self.bitspersample, tuple): # for example, RGB 565 def unpack(data: bytes, /) -> NDArray[Any]: # return numpy array from packed integers return unpack_rgb(data, dtype, self.bitspersample) elif self.bitspersample == 24 and dtype.char == 'f': # float24 if unpredict is not None: # floatpred_decode requires numpy.float24, which does not exist raise NotImplementedError('unpredicting float24 not supported') def unpack(data: bytes, /) -> NDArray[Any]: # return numpy.float32 array from float24 return imagecodecs.float24_decode( data, byteorder=self.parent.byteorder ) else: # bilevel and packed integers def unpack(data: bytes, /) -> NDArray[Any]: # return NumPy array from packed integers return imagecodecs.packints_decode( data, dtype, self.bitspersample, runlen=stwidth * samples ) def decode_other( data: bytes | None, index: int, /, *, jpegtables: bytes | None = None, jpegheader: bytes | None = None, _fullsize: bool = False, ) -> tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ]: # return decoded segment, its shape, and indices in image segmentindex, shape = indices(index) if data is None: if 
_fullsize: shape = pad_none(shape) return data, segmentindex, shape if self.fillorder == 2: data = imagecodecs.bitorder_decode(data) if decompress is not None: # TODO: calculate correct size for packed integers size = shape[0] * shape[1] * shape[2] * shape[3] data = decompress(data, out=size * dtype.itemsize) data_array = unpack(data) # type: ignore[arg-type] # del data data_array = reshape(data_array, segmentindex, shape) data_array = data_array.astype('=' + dtype.char, copy=False) if unpredict is not None: # unpredict is faster with native byte order data_array = unpredict(data_array, axis=-2, out=data_array) if _fullsize: data_array, shape = pad(data_array, shape) return data_array, segmentindex, shape return cache(decode_other) def segments( self, *, lock: threading.RLock | NullContext | None = None, maxworkers: int | None = None, func: Callable[..., Any] | None = None, # TODO: type this sort: bool = False, buffersize: int | None = None, _fullsize: bool | None = None, ) -> Iterator[ tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ] ]: """Return iterator over decoded tiles or strips. Parameters: lock: Reentrant lock to synchronize file seeks and reads. maxworkers: Maximum number of threads to concurrently decode segments. func: Function to process decoded segment. sort: Read segments from file in order of their offsets. buffersize: Approximate number of bytes to read from file in one pass. The default is :py:attr:`_TIFF.BUFFERSIZE`. _fullsize: Internal use. Yields: - Decoded segment or None for empty segments. - Position of segment in image array of normalized shape (separate sample, depth, length, width, contig sample). - Shape of segment (depth, length, width, contig samples). The shape of strips depends on their linear index. """ keyframe = self.keyframe # self or keyframe fh = self.parent.filehandle if lock is None: lock = fh.lock if _fullsize is None: _fullsize = keyframe.is_tiled decodeargs: dict[str, Any] = {'_fullsize': bool(_fullsize)} if keyframe.compression in {6, 7, 34892, 33007}: # JPEG decodeargs['jpegtables'] = self.jpegtables decodeargs['jpegheader'] = keyframe.jpegheader if func is None: def decode(args, decodeargs=decodeargs, decode=keyframe.decode): return decode(*args, **decodeargs) else: def decode(args, decodeargs=decodeargs, decode=keyframe.decode): return func(decode(*args, **decodeargs)) if maxworkers is None or maxworkers < 1: maxworkers = keyframe.maxworkers if maxworkers < 2: for segment in fh.read_segments( self.dataoffsets, self.databytecounts, lock=lock, sort=sort, buffersize=buffersize, flat=True, ): yield decode(segment) else: # reduce memory overhead by processing chunks of up to # buffersize of segments because ThreadPoolExecutor.map is not # collecting iterables lazily with ThreadPoolExecutor(maxworkers) as executor: for segments in fh.read_segments( self.dataoffsets, self.databytecounts, lock=lock, sort=sort, buffersize=buffersize, flat=False, ): yield from executor.map(decode, segments) def asarray( self, *, out: OutputType = None, squeeze: bool = True, lock: threading.RLock | NullContext | None = None, maxworkers: int | None = None, buffersize: int | None = None, ) -> NDArray[Any]: """Return image from page as NumPy array. Parameters: out: Specifies how image array is returned. By default, a new NumPy array is created. If a *numpy.ndarray*, a writable array to which the image is copied. 
If *'memmap'*, directly memory-map the image data in the
                file if possible; else create a memory-mapped array in a
                temporary file.
                If a *string* or *open file*, the file used to create a
                memory-mapped array.
            squeeze:
                Remove all length-1 dimensions (except X and Y) from
                image array.
                If *False*, return the image array with normalized
                5-dimensional shape :py:attr:`TiffPage.shaped`.
            lock:
                Reentrant lock to synchronize seeks and reads from file.
                The default is the lock of the parent's file handle.
            maxworkers:
                Maximum number of threads to concurrently decode segments.
                If *None* or *0*, use up to :py:attr:`_TIFF.MAXWORKERS`
                threads. See remarks in :py:meth:`TiffFile.asarray`.
            buffersize:
                Approximate number of bytes to read from file in one pass.
                The default is :py:attr:`_TIFF.BUFFERSIZE`.

        Returns:
            NumPy array of decompressed, unpredicted, and unpacked image
            data read from Strip/Tile Offsets/ByteCounts, formatted
            according to shape and dtype metadata found in tags and
            arguments.
            Photometric conversion, premultiplied alpha, orientation, and
            colorimetry corrections are not applied.
            Specifically, CMYK images are not converted to RGB, MinIsWhite
            images are not inverted, color palettes are not applied, gamma
            is not corrected, and CFA images are not demosaiced.
            The exception is YCbCr JPEG compressed images, which are
            converted to RGB.

        Raises:
            ValueError:
                Format of image in file is not supported and cannot be
                decoded.

        """
        keyframe = self.keyframe  # self or keyframe
        if 0 in keyframe.shaped or keyframe._dtype is None:
            return numpy.empty((0,), keyframe.dtype)
        if len(self.dataoffsets) == 0:
            raise TiffFileError('missing data offset')
        fh = self.parent.filehandle
        if lock is None:
            lock = fh.lock
        if (
            isinstance(out, str)
            and out == 'memmap'
            and keyframe.is_memmappable
        ):
            # direct memory map array in file
            with lock:
                closed = fh.closed
                if closed:
                    warnings.warn(
                        f'{self!r} reading array from closed file',
                        UserWarning,
                    )
                    fh.open()
                result = fh.memmap_array(
                    keyframe.parent.byteorder + keyframe._dtype.char,
                    keyframe.shaped,
                    offset=self.dataoffsets[0],
                )
        elif keyframe.is_contiguous:
            # read contiguous bytes to array
            if keyframe.is_subsampled:
                raise NotImplementedError('chroma subsampling not supported')
            if out is not None:
                out = create_output(out, keyframe.shaped, keyframe._dtype)
            with lock:
                closed = fh.closed
                if closed:
                    warnings.warn(
                        f'{self!r} reading array from closed file',
                        UserWarning,
                    )
                    fh.open()
                fh.seek(self.dataoffsets[0])
                result = fh.read_array(
                    keyframe.parent.byteorder + keyframe._dtype.char,
                    product(keyframe.shaped),
                    out=out,
                )
            if keyframe.fillorder == 2:
                result = imagecodecs.bitorder_decode(result, out=result)
            if keyframe.predictor != 1:
                # predictors without compression
                unpredict = TIFF.UNPREDICTORS[keyframe.predictor]
                if keyframe.predictor == 2:
                    # horizontal differencing decodes in-place
                    result = unpredict(result, axis=-2, out=result)
                else:
                    # floatpred cannot decode in-place
                    out = unpredict(result, axis=-2, out=result)
                    result[:] = out
        elif (
            keyframe.jpegheader is not None
            and keyframe is self
            and 273 in self.tags  # striped ...
and self.is_tiled # but reported as tiled # TODO: imagecodecs can decode larger JPEG and self.imagewidth <= 65500 and self.imagelength <= 65500 ): # decode the whole NDPI JPEG strip with lock: closed = fh.closed if closed: warnings.warn( f'{self!r} reading array from closed file', UserWarning ) fh.open() fh.seek(self.tags[273].value[0]) # StripOffsets data = fh.read(self.tags[279].value[0]) # StripByteCounts decompress = TIFF.DECOMPRESSORS[self.compression] result = decompress( data, bitspersample=self.bitspersample, out=out, # shape=(self.imagelength, self.imagewidth) ) del data else: # decode individual strips or tiles with lock: closed = fh.closed if closed: warnings.warn( f'{self!r} reading array from closed file', UserWarning ) fh.open() keyframe.decode # init TiffPage.decode function under lock result = create_output(out, keyframe.shaped, keyframe._dtype) def func( decoderesult: tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ], keyframe: TiffPage = keyframe, out: NDArray[Any] = result, ) -> None: # copy decoded segments to output array segment, (s, d, h, w, _), shape = decoderesult if segment is None: out[ s, d : d + shape[0], h : h + shape[1], w : w + shape[2] ] = keyframe.nodata else: out[ s, d : d + shape[0], h : h + shape[1], w : w + shape[2] ] = segment[ : keyframe.imagedepth - d, : keyframe.imagelength - h, : keyframe.imagewidth - w, ] # except IndexError: # pass # corrupted file, for example, with too many strips for _ in self.segments( func=func, lock=lock, maxworkers=maxworkers, buffersize=buffersize, sort=True, _fullsize=False, ): pass result.shape = keyframe.shaped if squeeze: try: result.shape = keyframe.shape except ValueError as exc: logger().warning( f'{self!r} failed to reshape ' f'{result.shape} to {keyframe.shape}, raised {exc!r:.128}' ) if closed: # TODO: close file if an exception occurred above fh.close() return result def aszarr(self, **kwargs: Any) -> ZarrTiffStore: """Return image from page as Zarr 2 store. Parameters: **kwarg: Passed to :py:class:`ZarrTiffStore`. """ return ZarrTiffStore(self, **kwargs) def asrgb( self, *, uint8: bool = False, alpha: Container[int] | None = None, **kwargs: Any, ) -> NDArray[Any]: """Return image as RGB(A). Work in progress. Do not use. :meta private: """ data = self.asarray(**kwargs) keyframe = self.keyframe # self or keyframe if keyframe.photometric == PHOTOMETRIC.PALETTE: colormap = keyframe.colormap if colormap is None: raise ValueError('no colormap') if ( colormap.shape[1] < 2**keyframe.bitspersample or keyframe.dtype is None or keyframe.dtype.char not in 'BH' ): raise ValueError('cannot apply colormap') if uint8: if colormap.max() > 255: colormap >>= 8 colormap = colormap.astype(numpy.uint8) if 'S' in keyframe.axes: data = data[..., 0] if keyframe.planarconfig == 1 else data[0] data = apply_colormap(data, colormap) elif keyframe.photometric == PHOTOMETRIC.RGB: if keyframe.extrasamples: if alpha is None: alpha = EXTRASAMPLE for i, exs in enumerate(keyframe.extrasamples): if exs in EXTRASAMPLE: if keyframe.planarconfig == 1: data = data[..., [0, 1, 2, 3 + i]] else: data = data[:, [0, 1, 2, 3 + i]] break else: if keyframe.planarconfig == 1: data = data[..., :3] else: data = data[:, :3] # TODO: convert to uint8? 
elif keyframe.photometric == PHOTOMETRIC.MINISBLACK: raise NotImplementedError elif keyframe.photometric == PHOTOMETRIC.MINISWHITE: raise NotImplementedError elif keyframe.photometric == PHOTOMETRIC.SEPARATED: raise NotImplementedError else: raise NotImplementedError return data def _gettags( self, codes: Container[int] | None = None, /, lock: threading.RLock | None = None, ) -> list[tuple[int, TiffTag]]: """Return list of (code, TiffTag).""" return [ (tag.code, tag) for tag in self.tags if codes is None or tag.code in codes ] def _nextifd(self) -> int: """Return offset to next IFD from file.""" fh = self.parent.filehandle tiff = self.parent.tiff fh.seek(self.offset) tagno = struct.unpack(tiff.tagnoformat, fh.read(tiff.tagnosize))[0] fh.seek(self.offset + tiff.tagnosize + tagno * tiff.tagsize) return int( struct.unpack(tiff.offsetformat, fh.read(tiff.offsetsize))[0] ) def aspage(self) -> TiffPage: """Return TiffPage instance.""" return self @property def index(self) -> int: """Index of page in IFD chain.""" return self._index[-1] @property def treeindex(self) -> tuple[int, ...]: """Index of page in IFD tree.""" return self._index @property def keyframe(self) -> TiffPage: """Self.""" return self @keyframe.setter def keyframe(self, index: TiffPage) -> None: return @property def name(self) -> str: """Name of image array.""" index = self._index if len(self._index) > 1 else self._index[0] return f'TiffPage {index}' @property def ndim(self) -> int: """Number of dimensions in image array.""" return len(self.shape) @cached_property def dims(self) -> tuple[str, ...]: """Names of dimensions in image array.""" names = TIFF.AXES_NAMES return tuple(names[ax] for ax in self.axes) @cached_property def sizes(self) -> dict[str, int]: """Ordered map of dimension names to lengths.""" shape = self.shape names = TIFF.AXES_NAMES return {names[ax]: shape[i] for i, ax in enumerate(self.axes)} @cached_property def coords(self) -> dict[str, NDArray[Any]]: """Ordered map of dimension names to coordinate arrays.""" resolution = self.get_resolution() coords: dict[str, NDArray[Any]] = {} for ax, size in zip(self.axes, self.shape): name = TIFF.AXES_NAMES[ax] value = None step: int | float = 1 if ax == 'X': step = resolution[0] elif ax == 'Y': step = resolution[1] elif ax == 'S': value = self._sample_names() elif ax == 'Z': # a ZResolution tag doesn't exist. # use XResolution if it agrees with YResolution if resolution[0] == resolution[1]: step = resolution[0] if value is not None: coords[name] = numpy.asarray(value) elif step == 0 or step == 1 or size == 0: coords[name] = numpy.arange(size) else: coords[name] = numpy.linspace( 0, size / step, size, endpoint=False, dtype=numpy.float32 ) assert len(coords[name]) == size return coords @cached_property def attr(self) -> dict[str, Any]: """Arbitrary metadata associated with image array.""" # TODO: what to return? 
return {} @cached_property def size(self) -> int: """Number of elements in image array.""" return product(self.shape) @cached_property def nbytes(self) -> int: """Number of bytes in image array.""" if self.dtype is None: return 0 return self.size * self.dtype.itemsize @property def colormap(self) -> NDArray[numpy.uint16] | None: """Value of Colormap tag.""" return self.tags.valueof(320) @property def iccprofile(self) -> bytes | None: """Value of InterColorProfile tag.""" return self.tags.valueof(34675) @property def transferfunction(self) -> NDArray[numpy.uint16] | None: """Value of TransferFunction tag.""" return self.tags.valueof(301) def get_resolution( self, unit: RESUNIT | int | str | None = None, scale: float | int | None = None, ) -> tuple[int | float, int | float]: """Return number of pixels per unit in X and Y dimensions. By default, the XResolution and YResolution tag values are returned. Missing tag values are set to 1. Parameters: unit: Unit of measurement of returned values. The default is the value of the ResolutionUnit tag. scale: Factor to convert resolution values to meter unit. The default is determined from the ResolutionUnit tag. """ scales = { 1: 1, # meter, no unit 2: 100 / 2.54, # INCH 3: 100, # CENTIMETER 4: 1000, # MILLIMETER 5: 1000000, # MICROMETER } if unit is not None: unit = enumarg(RESUNIT, unit) try: if scale is None: resolutionunit = self.tags.valueof(296, default=2) scale = scales[resolutionunit] except Exception as exc: logger().warning( f'{self!r} raised {exc!r:.128}' ) scale = 1 else: scale2 = scales[unit] if scale % scale2 == 0: scale //= scale2 else: scale /= scale2 elif scale is None: scale = 1 resolution: list[int | float] = [] n: int d: int for code in 282, 283: try: n, d = self.tags.valueof(code, default=(1, 1)) if d == 0: value = n * scale elif n % d == 0: value = n // d * scale else: value = n / d * scale except Exception: value = 1 resolution.append(value) return resolution[0], resolution[1] @cached_property def resolution(self) -> tuple[float, float]: """Number of pixels per resolutionunit in X and Y directions.""" # values are returned in (somewhat unexpected) XY order to # keep symmetry with the TiffWriter.write resolution argument resolution = self.get_resolution() return float(resolution[0]), float(resolution[1]) @property def resolutionunit(self) -> int: """Unit of measurement for X and Y resolutions.""" return self.tags.valueof(296, default=2) @property def datetime(self) -> DateTime | None: """Date and time of image creation.""" value = self.tags.valueof(306) if value is None: return None try: return strptime(value) except Exception: pass return None @property def tile(self) -> tuple[int, ...] 
| None: """Tile depth, length, and width.""" if not self.is_tiled: return None if self.tiledepth > 1: return (self.tiledepth, self.tilelength, self.tilewidth) return (self.tilelength, self.tilewidth) @cached_property def chunks(self) -> tuple[int, ...]: """Shape of images in tiles or strips.""" shape: list[int] = [] if self.tiledepth > 1: shape.append(self.tiledepth) if self.is_tiled: shape.extend((self.tilelength, self.tilewidth)) else: shape.extend((self.rowsperstrip, self.imagewidth)) if self.planarconfig == 1 and self.samplesperpixel > 1: shape.append(self.samplesperpixel) return tuple(shape) @cached_property def chunked(self) -> tuple[int, ...]: """Shape of chunked image.""" shape: list[int] = [] if self.planarconfig == 2 and self.samplesperpixel > 1: shape.append(self.samplesperpixel) if self.is_tiled: if self.imagedepth > 1: shape.append( (self.imagedepth + self.tiledepth - 1) // self.tiledepth ) shape.append( (self.imagelength + self.tilelength - 1) // self.tilelength ) shape.append( (self.imagewidth + self.tilewidth - 1) // self.tilewidth ) else: if self.imagedepth > 1: shape.append(self.imagedepth) shape.append( (self.imagelength + self.rowsperstrip - 1) // self.rowsperstrip ) shape.append(1) if self.planarconfig == 1 and self.samplesperpixel > 1: shape.append(1) return tuple(shape) @cached_property def hash(self) -> int: """Checksum to identify pages in same series. Pages with the same hash can use the same decode function. The hash is calculated from the following properties: :py:attr:`TiffFile.tiff`, :py:attr:`TiffPage.shaped`, :py:attr:`TiffPage.rowsperstrip`, :py:attr:`TiffPage.tilewidth`, :py:attr:`TiffPage.tilelength`, :py:attr:`TiffPage.tiledepth`, :py:attr:`TiffPage.sampleformat`, :py:attr:`TiffPage.bitspersample`, :py:attr:`TiffPage.fillorder`, :py:attr:`TiffPage.predictor`, :py:attr:`TiffPage.compression`, :py:attr:`TiffPage.extrasamples`, and :py:attr:`TiffPage.photometric`. """ return hash( self.shaped + ( self.parent.tiff, self.rowsperstrip, self.tilewidth, self.tilelength, self.tiledepth, self.sampleformat, self.bitspersample, self.fillorder, self.predictor, self.compression, self.extrasamples, self.photometric, ) ) @cached_property def pages(self) -> TiffPages | None: """Sequence of sub-pages, SubIFDs.""" if 330 not in self.tags: return None return TiffPages(self, index=self.index) @cached_property def maxworkers(self) -> int: """Maximum number of threads for decoding segments. A value of 0 disables multi-threading also when stacking pages. """ if self.is_contiguous or self.dtype is None: return 0 if self.compression in TIFF.IMAGE_COMPRESSIONS: return min(TIFF.MAXWORKERS, len(self.dataoffsets)) bytecount = product(self.chunks) * self.dtype.itemsize if bytecount < 2048: # disable multi-threading for small segments return 0 if self.compression == 5 and bytecount < 14336: # disable multi-threading for small LZW compressed segments return 0 if len(self.dataoffsets) < 4: return 1 if self.compression != 1 or self.fillorder != 1 or self.predictor != 1: if imagecodecs is not None: return min(TIFF.MAXWORKERS, len(self.dataoffsets)) return 2 # optimum for large number of uncompressed tiles @cached_property def is_contiguous(self) -> bool: """Image data is stored contiguously. Contiguous image data can be read from ``offset=TiffPage.dataoffsets[0]`` with ``size=TiffPage.nbytes``. Excludes prediction and fillorder. 
""" if ( self.sampleformat == 5 or self.compression != 1 or self.bitspersample not in {8, 16, 32, 64} ): return False if 322 in self.tags: # TileWidth if ( self.imagewidth != self.tilewidth or self.imagelength % self.tilelength or self.tilewidth % 16 or self.tilelength % 16 ): return False if ( 32997 in self.tags # ImageDepth and 32998 in self.tags # TileDepth and ( self.imagelength != self.tilelength or self.imagedepth % self.tiledepth ) ): return False offsets = self.dataoffsets bytecounts = self.databytecounts if len(offsets) == 0: return False if len(offsets) == 1: return True if self.is_stk or self.is_lsm: return True if sum(bytecounts) != self.nbytes: return False if all( bytecounts[i] != 0 and offsets[i] + bytecounts[i] == offsets[i + 1] for i in range(len(offsets) - 1) ): return True return False @cached_property def is_final(self) -> bool: """Image data are stored in final form. Excludes byte-swapping.""" return ( self.is_contiguous and self.fillorder == 1 and self.predictor == 1 and not self.is_subsampled ) @cached_property def is_memmappable(self) -> bool: """Image data in file can be memory-mapped to NumPy array.""" return ( self.parent.filehandle.is_file and self.is_final # and (self.bitspersample == 8 or self.parent.isnative) # aligned? and self.dtype is not None and self.dataoffsets[0] % self.dtype.itemsize == 0 ) def __repr__(self) -> str: index = self._index if len(self._index) > 1 else self._index[0] return f'' def __str__(self) -> str: return self._str() def _str(self, detail: int = 0, width: int = 79) -> str: """Return string containing information about TiffPage.""" if self.keyframe != self: return TiffFrame._str( self, detail, width # type: ignore[arg-type] ) attr = '' for name in ('memmappable', 'final', 'contiguous'): attr = getattr(self, 'is_' + name) if attr: attr = name.upper() break def tostr(name: str, /, skip: int = 1) -> str: obj = getattr(self, name) if obj == skip: return '' try: value = getattr(obj, 'name') except AttributeError: return '' return str(value) info = ' '.join( s.lower() for s in ( 'x'.join(str(i) for i in self.shape), f'{SAMPLEFORMAT(self.sampleformat).name}{self.bitspersample}', ' '.join( i for i in ( PHOTOMETRIC(self.photometric).name, 'REDUCED' if self.is_reduced else '', 'MASK' if self.is_mask else '', 'TILED' if self.is_tiled else '', tostr('compression'), tostr('planarconfig'), tostr('predictor'), tostr('fillorder'), ) + (attr,) if i ), '|'.join(f.upper() for f in sorted(self.flags)), ) if s ) index = self._index if len(self._index) > 1 else self._index[0] info = f'TiffPage {index} @{self.offset} {info}' if detail <= 0: return info info_list = [info, self.tags._str(detail + 1, width=width)] if detail > 1: for name in ('ndpi',): name = name + '_tags' attr = getattr(self, name, '') if attr: info_list.append( f'{name.upper()}\n' f'{pformat(attr, width=width, height=detail * 8)}' ) if detail > 3: try: data = self.asarray() info_list.append( f'DATA\n' f'{pformat(data, width=width, height=detail * 8)}' ) except Exception: pass return '\n\n'.join(info_list) def _sample_names(self) -> list[str] | None: """Return names of samples.""" if 'S' not in self.axes: return None samples = self.shape[self.axes.find('S')] extrasamples = len(self.extrasamples) if samples < 1 or extrasamples > 2: return None if self.photometric == 0: names = ['WhiteIsZero'] elif self.photometric == 1: names = ['BlackIsZero'] elif self.photometric == 2: names = ['Red', 'Green', 'Blue'] elif self.photometric == 5: names = ['Cyan', 'Magenta', 'Yellow', 'Black'] elif 
self.photometric == 6: if self.compression in {6, 7, 34892, 33007}: # YCBCR -> RGB for JPEG names = ['Red', 'Green', 'Blue'] else: names = ['Luma', 'Cb', 'Cr'] else: return None if extrasamples > 0: names += [enumarg(EXTRASAMPLE, self.extrasamples[0]).name.title()] if extrasamples > 1: names += [enumarg(EXTRASAMPLE, self.extrasamples[1]).name.title()] if len(names) != samples: return None return names @cached_property def flags(self) -> set[str]: r"""Set of ``is\_\*`` properties that are True.""" return { name.lower() for name in TIFF.PAGE_FLAGS if getattr(self, 'is_' + name) } @cached_property def andor_tags(self) -> dict[str, Any] | None: """Consolidated metadata from Andor tags.""" if not self.is_andor: return None result = {'Id': self.tags[4864].value} # AndorId for tag in self.tags: # list(self.tags.values()): code = tag.code if not 4864 < code < 5031: continue name = tag.name name = name[5:] if len(name) > 5 else name result[name] = tag.value # del self.tags[code] return result @cached_property def epics_tags(self) -> dict[str, Any] | None: """Consolidated metadata from EPICS areaDetector tags. Use the :py:func:`epics_datetime` function to get a datetime object from the epicsTSSec and epicsTSNsec tags. """ if not self.is_epics: return None result = {} for tag in self.tags: # list(self.tags.values()): code = tag.code if not 65000 <= code < 65500: continue value = tag.value if code == 65000: # not a POSIX timestamp # https://github.com/bluesky/area-detector-handlers/issues/20 result['timeStamp'] = float(value) elif code == 65001: result['uniqueID'] = int(value) elif code == 65002: result['epicsTSSec'] = int(value) elif code == 65003: result['epicsTSNsec'] = int(value) else: key, value = value.split(':', 1) result[key] = astype(value) # del self.tags[code] return result @cached_property def ndpi_tags(self) -> dict[str, Any] | None: """Consolidated metadata from Hamamatsu NDPI tags.""" # TODO: parse 65449 ini style comments if not self.is_ndpi: return None tags = self.tags result = {} for name in ('Make', 'Model', 'Software'): result[name] = tags[name].value for code, name in TIFF.NDPI_TAGS.items(): if code in tags: result[name] = tags[code].value # del tags[code] if 'McuStarts' in result: mcustarts = result['McuStarts'] if 'McuStartsHighBytes' in result: high = result['McuStartsHighBytes'].astype(numpy.uint64) high <<= 32 mcustarts = mcustarts.astype(numpy.uint64) mcustarts += high del result['McuStartsHighBytes'] result['McuStarts'] = mcustarts return result @cached_property def geotiff_tags(self) -> dict[str, Any] | None: """Consolidated metadata from GeoTIFF tags.""" if not self.is_geotiff: return None tags = self.tags gkd = tags.valueof(34735) # GeoKeyDirectoryTag if gkd is None or len(gkd) < 2 or gkd[0] != 1: logger().warning(f'{self!r} invalid GeoKeyDirectoryTag') return {} result = { 'KeyDirectoryVersion': gkd[0], 'KeyRevision': gkd[1], 'KeyRevisionMinor': gkd[2], # 'NumberOfKeys': gkd[3], } # deltags = ['GeoKeyDirectoryTag'] geokeys = TIFF.GEO_KEYS geocodes = TIFF.GEO_CODES for index in range(gkd[3]): try: keyid, tagid, count, offset = gkd[ 4 + index * 4 : index * 4 + 8 ] except Exception as exc: logger().warning( f'{self!r} corrupted GeoKeyDirectoryTag ' f'raised {exc!r:.128}' ) continue if tagid == 0: value = offset else: try: value = tags[tagid].value[offset : offset + count] except TiffFileError as exc: logger().warning( f'{self!r} corrupted GeoKeyDirectoryTag {tagid} ' f'raised {exc!r:.128}' ) continue except KeyError as exc: logger().warning( f'{self!r} GeoKeyDirectoryTag 
{tagid} not found, ' f'raised {exc!r:.128}' ) continue if tagid == 34737 and count > 1 and value[-1] == '|': value = value[:-1] value = value if count > 1 else value[0] if keyid in geocodes: try: value = geocodes[keyid](value) except Exception: pass try: key = geokeys(keyid).name except ValueError: key = keyid result[key] = value value = tags.valueof(33920) # IntergraphMatrixTag if value is not None: value = numpy.array(value) if value.size == 16: value = value.reshape((4, 4)).tolist() result['IntergraphMatrix'] = value value = tags.valueof(33550) # ModelPixelScaleTag if value is not None: result['ModelPixelScale'] = numpy.array(value).tolist() value = tags.valueof(33922) # ModelTiepointTag if value is not None: value = numpy.array(value).reshape((-1, 6)).squeeze().tolist() result['ModelTiepoint'] = value value = tags.valueof(34264) # ModelTransformationTag if value is not None: value = numpy.array(value).reshape((4, 4)).tolist() result['ModelTransformation'] = value # if 33550 in tags and 33922 in tags: # sx, sy, sz = tags[33550].value # ModelPixelScaleTag # tiepoints = tags[33922].value # ModelTiepointTag # transforms = [] # for tp in range(0, len(tiepoints), 6): # i, j, k, x, y, z = tiepoints[tp : tp + 6] # transforms.append( # [ # [sx, 0.0, 0.0, x - i * sx], # [0.0, -sy, 0.0, y + j * sy], # [0.0, 0.0, sz, z - k * sz], # [0.0, 0.0, 0.0, 1.0], # ] # ) # if len(tiepoints) == 6: # transforms = transforms[0] # result['ModelTransformation'] = transforms rpcc = tags.valueof(50844) # RPCCoefficientTag if rpcc is not None: result['RPCCoefficient'] = { 'ERR_BIAS': rpcc[0], 'ERR_RAND': rpcc[1], 'LINE_OFF': rpcc[2], 'SAMP_OFF': rpcc[3], 'LAT_OFF': rpcc[4], 'LONG_OFF': rpcc[5], 'HEIGHT_OFF': rpcc[6], 'LINE_SCALE': rpcc[7], 'SAMP_SCALE': rpcc[8], 'LAT_SCALE': rpcc[9], 'LONG_SCALE': rpcc[10], 'HEIGHT_SCALE': rpcc[11], 'LINE_NUM_COEFF': rpcc[12:33], 'LINE_DEN_COEFF ': rpcc[33:53], 'SAMP_NUM_COEFF': rpcc[53:73], 'SAMP_DEN_COEFF': rpcc[73:], } return result @cached_property def shaped_description(self) -> str | None: """Description containing array shape if exists, else None.""" for description in (self.description, self.description1): if not description or '"mibi.' 
in description:
                return None
            if description[:1] == '{' and '"shape":' in description:
                return description
            if description[:6] == 'shape=':
                return description
        return None

    @cached_property
    def imagej_description(self) -> str | None:
        """ImageJ description if exists, else None."""
        for description in (self.description, self.description1):
            if not description:
                return None
            if description[:7] == 'ImageJ=' or description[:7] == 'SCIFIO=':
                return description
        return None

    @cached_property
    def is_jfif(self) -> bool:
        """JPEG compressed segments contain JFIF metadata."""
        if (
            self.compression not in {6, 7, 34892, 33007}
            or len(self.dataoffsets) < 1
            or self.dataoffsets[0] == 0
            or len(self.databytecounts) < 1
            or self.databytecounts[0] < 11
        ):
            return False
        fh = self.parent.filehandle
        fh.seek(self.dataoffsets[0] + 6)
        data = fh.read(4)
        return data == b'JFIF'  # or data == b'Exif'

    @property
    def is_frame(self) -> bool:
        """Object is :py:class:`TiffFrame` instance."""
        return False

    @property
    def is_virtual(self) -> bool:
        """Page does not have IFD structure in file."""
        return False

    @property
    def is_subifd(self) -> bool:
        """Page is SubIFD of another page."""
        return len(self._index) > 1

    @property
    def is_reduced(self) -> bool:
        """Page is reduced image of another image."""
        return bool(self.subfiletype & 0b1)

    @property
    def is_multipage(self) -> bool:
        """Page is part of multi-page image."""
        return bool(self.subfiletype & 0b10)

    @property
    def is_mask(self) -> bool:
        """Page is transparency mask for another image."""
        return bool(self.subfiletype & 0b100)

    @property
    def is_mrc(self) -> bool:
        """Page is part of Mixed Raster Content."""
        return bool(self.subfiletype & 0b1000)

    @property
    def is_tiled(self) -> bool:
        """Page contains tiled image."""
        return self.tilewidth > 0  # return 322 in self.tags  # TileWidth

    @property
    def is_subsampled(self) -> bool:
        """Page contains chroma subsampled image."""
        if self.subsampling is not None:
            return self.subsampling != (1, 1)
        return self.photometric == 6  # YCbCr
        # RGB JPEG usually stored as subsampled YCbCr
        # self.compression == 7
        # and self.photometric == 2
        # and self.planarconfig == 1

    @property
    def is_imagej(self) -> bool:
        """Page contains ImageJ description metadata."""
        return self.imagej_description is not None

    @property
    def is_shaped(self) -> bool:
        """Page contains Tifffile JSON metadata."""
        return self.shaped_description is not None

    @property
    def is_mdgel(self) -> bool:
        """Page contains MDFileTag tag."""
        return (
            37701 not in self.tags  # AgilentBinary
            and 33445 in self.tags  # MDFileTag
        )

    @property
    def is_agilent(self) -> bool:
        """Page contains Agilent Technologies tags."""
        # tag 270 and 285 contain color names
        return 285 in self.tags and 37701 in self.tags  # AgilentBinary

    @property
    def is_mediacy(self) -> bool:
        """Page contains Media Cybernetics Id tag."""
        tag = self.tags.get(50288)  # MC_Id
        try:
            return tag is not None and tag.value[:7] == b'MC TIFF'
        except Exception:
            return False

    @property
    def is_stk(self) -> bool:
        """Page contains UIC1Tag tag."""
        return 33628 in self.tags

    @property
    def is_lsm(self) -> bool:
        """Page contains CZ_LSMINFO tag."""
        return 34412 in self.tags

    @property
    def is_fluoview(self) -> bool:
        """Page contains FluoView MM_STAMP tag."""
        return 34362 in self.tags

    @property
    def is_nih(self) -> bool:
        """Page contains NIHImageHeader tag."""
        return 43314 in self.tags

    @property
    def is_volumetric(self) -> bool:
        """Page contains SGI ImageDepth tag with value > 1."""
        return self.imagedepth > 1

    @property
    def is_vista(self) -> bool:
        """Software tag is 'ISS Vista'."""
        return self.software == 'ISS Vista'

    @property
    def is_metaseries(self) -> bool:
        """Page contains MDS MetaSeries metadata in ImageDescription tag."""
        if self.index != 0 or self.software != 'MetaSeries':
            return False
        d = self.description
        return d.startswith('<MetaData>') and d.endswith('</MetaData>')

    @property
    def is_ome(self) -> bool:
        """Page contains OME-XML in ImageDescription tag."""
        if self.index != 0 or not self.description:
            return False
        return self.description[-10:].strip().endswith('OME>')

    @property
    def is_scn(self) -> bool:
        """Page contains Leica SCN XML in ImageDescription tag."""
        if self.index != 0 or not self.description:
            return False
        return self.description[-10:].strip().endswith('</scn>')

    @property
    def is_micromanager(self) -> bool:
        """Page contains MicroManagerMetadata tag."""
        return 51123 in self.tags

    @property
    def is_andor(self) -> bool:
        """Page contains Andor Technology tags 4864-5030."""
        return 4864 in self.tags

    @property
    def is_pilatus(self) -> bool:
        """Page contains Pilatus tags."""
        return (
            self.software[:8] == 'TVX TIFF'
            and self.description[:2] == '# '
        )

    @property
    def is_epics(self) -> bool:
        """Page contains EPICS areaDetector tags."""
        return (
            self.description == 'EPICS areaDetector'
            or self.software == 'EPICS areaDetector'
        )

    @property
    def is_tvips(self) -> bool:
        """Page contains TVIPS metadata."""
        return 37706 in self.tags

    @property
    def is_fei(self) -> bool:
        """Page contains FEI_SFEG or FEI_HELIOS tags."""
        return 34680 in self.tags or 34682 in self.tags

    @property
    def is_sem(self) -> bool:
        """Page contains CZ_SEM tag."""
        return 34118 in self.tags

    @property
    def is_svs(self) -> bool:
        """Page contains Aperio metadata."""
        return self.description[:7] == 'Aperio '

    @property
    def is_bif(self) -> bool:
        """Page contains Ventana metadata."""
        try:
            return 700 in self.tags and (
                # avoid reading XMP tag from file at this point
                # b'<iScan' in self.tags[700].value[:4096]
                'Ventana' in self.software
                or 'ScanOutputManager' in self.software
                or self.description == 'Label Image'
                or self.description == 'Label_Image'
                or self.description == 'Probability_Image'
            )
        except Exception:
            return False

    @property
    def is_scanimage(self) -> bool:
        """Page contains ScanImage metadata."""
        return (
            self.software[:3] == 'SI.'
            or self.description[:6] == 'state.'
            or 'scanimage.SI' in self.description[-256:]
        )

    @property
    def is_indica(self) -> bool:
        """Page contains IndicaLabs metadata."""
        return self.software[:21] == 'IndicaLabsImageWriter'

    @property
    def is_avs(self) -> bool:
        """Page contains Argos AVS XML metadata."""
        try:
            return (
                65000 in self.tags
                and self.tags.valueof(65000)[:6] == '<Argos'
            )
        except Exception:
            return False

    @property
    def is_qpi(self) -> bool:
        """Page contains PerkinElmer tissue images metadata."""
        # The ImageDescription tag contains XML with a top-level
        # <PerkinElmer-QPI-ImageDescription> element
        return self.software[:15] == 'PerkinElmer-QPI'

    @property
    def is_geotiff(self) -> bool:
        """Page contains GeoTIFF metadata."""
        return 34735 in self.tags  # GeoKeyDirectoryTag

    @property
    def is_gdal(self) -> bool:
        """Page contains GDAL metadata."""
        # startswith '<GDALMetadata>'
        return 42112 in self.tags  # GDAL_METADATA

    @property
    def is_astrotiff(self) -> bool:
        """Page contains AstroTIFF FITS metadata."""
        return (
            self.description[:7] == 'SIMPLE '
            and self.description[-3:] == 'END'
        )

    @property
    def is_streak(self) -> bool:
        """Page contains Hamamatsu streak metadata."""
        return (
            self.description[:1] == '['
            and '],' in self.description[1:32]
            # and self.tags.get(315, '').value[:19] == 'Copyright Hamamatsu'
        )

    @property
    def is_dng(self) -> bool:
        """Page contains DNG metadata."""
        return 50706 in self.tags  # DNGVersion

    @property
    def is_tiffep(self) -> bool:
        """Page contains TIFF/EP metadata."""
        return 37398 in self.tags  # TIFF/EPStandardID

    @property
    def is_sis(self) -> bool:
        """Page contains Olympus SIS metadata."""
        return 33560 in self.tags or 33471 in self.tags

    @property
    def is_ndpi(self) -> bool:
        """Page contains NDPI metadata."""
        return 65420 in self.tags and 271 in self.tags

    @property
    def is_philips(self) -> bool:
        """Page contains Philips DP metadata."""
        return self.software[:10] == 'Philips DP' and self.description[
            -16:
        ].strip().endswith('</DataObject>')

    @property
    def is_eer(self) -> bool:
        """Page contains EER acquisition metadata."""
        return (
            self.parent.is_bigtiff
            and self.compression in {1, 65000, 65001, 65002}
            and 65001 in self.tags
            and self.tags[65001].dtype == 7
        )
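
# Illustrative sketch, not part of the library: the boolean ``is_*``
# properties above can be used to detect format-specific metadata of a
# page. The file name 'temp.tif' is hypothetical.
#
# >>> with TiffFile('temp.tif') as tif:
# ...     page = tif.pages.first
# ...     if page.is_ome:
# ...         omexml = page.description  # OME-XML string
# ...     flags = [
# ...         attr[3:]
# ...         for attr in dir(page)
# ...         if attr[:3] == 'is_' and getattr(page, attr)
# ...     ]
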
| None """Positions of SubIFDs in file.""" jpegtables: bytes | None """JPEG quantization and/or Huffman tables.""" _keyframe: TiffPage | None _index: tuple[int, ...] # index of frame in IFD tree. def __init__( self, parent: TiffFile, /, index: int | Sequence[int], *, offset: int | None = None, keyframe: TiffPage | None = None, dataoffsets: tuple[int, ...] | None = None, databytecounts: tuple[int, ...] | None = None, ): self._keyframe = None self.parent = parent self.offset = int(offset) if offset else 0 self.subifds = None self.jpegtables = None self.dataoffsets = () self.databytecounts = () if isinstance(index, int): self._index = (index,) else: self._index = tuple(index) if dataoffsets is not None and databytecounts is not None: # initialize "virtual frame" from offsets and bytecounts self.offset = 0 if offset is None else offset self.dataoffsets = dataoffsets self.databytecounts = databytecounts self._keyframe = keyframe return if offset is None: self.offset = parent.filehandle.tell() else: parent.filehandle.seek(offset) if keyframe is None: tags = {273, 279, 324, 325, 330, 347} elif keyframe.is_contiguous: # use databytecounts from keyframe tags = {256, 273, 324, 330} self.databytecounts = keyframe.databytecounts else: tags = {256, 273, 279, 324, 325, 330, 347} for code, tag in self._gettags(tags): if code in {273, 324}: self.dataoffsets = tag.value elif code in {279, 325}: self.databytecounts = tag.value elif code == 330: self.subifds = tag.value elif code == 347: self.jpegtables = tag.value elif keyframe is None or ( code == 256 and keyframe.imagewidth != tag.value ): raise RuntimeError('incompatible keyframe') if not self.dataoffsets: logger().warning(f'{self!r} is missing required tags') elif keyframe is not None and len(self.dataoffsets) != len( keyframe.dataoffsets ): raise RuntimeError('incompatible keyframe') if keyframe is not None: self.keyframe = keyframe def _gettags( self, codes: Container[int] | None = None, /, lock: threading.RLock | None = None, ) -> list[tuple[int, TiffTag]]: """Return list of (code, TiffTag) from file.""" fh = self.parent.filehandle tiff = self.parent.tiff unpack = struct.unpack rlock: Any = NullContext() if lock is None else lock tags = [] with rlock: fh.seek(self.offset) try: tagno = unpack(tiff.tagnoformat, fh.read(tiff.tagnosize))[0] if tagno > 4096: raise ValueError(f'suspicious number of tags {tagno}') except Exception as exc: raise TiffFileError( f'corrupted tag list @{self.offset}' ) from exc tagoffset = self.offset + tiff.tagnosize # fh.tell() tagsize = tiff.tagsize tagindex = -tagsize codeformat = tiff.tagformat1[:2] tagbytes = fh.read(tagsize * tagno) for _ in range(tagno): tagindex += tagsize code = unpack(codeformat, tagbytes[tagindex : tagindex + 2])[0] if codes and code not in codes: continue try: tag = TiffTag.fromfile( self.parent, offset=tagoffset + tagindex, header=tagbytes[tagindex : tagindex + tagsize], ) except TiffFileError as exc: logger().error( f'{self!r} raised {exc!r:.128}' ) continue tags.append((code, tag)) return tags def _nextifd(self) -> int: """Return offset to next IFD from file.""" return TiffPage._nextifd(self) # type: ignore[arg-type] def aspage(self) -> TiffPage: """Return TiffPage from file. Raise ValueError if frame is virtual. """ if self.is_virtual: raise ValueError('cannot return virtual frame as page') fh = self.parent.filehandle closed = fh.closed if closed: # this is an inefficient resort in case a user calls aspage # of a TiffFrame with a closed FileHandle. 
warnings.warn( f'{self!r} reading TiffPage from closed file', UserWarning ) fh.open() try: fh.seek(self.offset) page = TiffPage(self.parent, index=self.index) finally: if closed: fh.close() return page def asarray(self, *args: Any, **kwargs: Any) -> NDArray[Any]: """Return image from frame as NumPy array. Parameters: **kwargs: Arguments passed to :py:meth:`TiffPage.asarray`. """ return TiffPage.asarray( self, *args, **kwargs # type: ignore[arg-type] ) def aszarr(self, **kwargs: Any) -> ZarrTiffStore: """Return image from frame as Zarr 2 store. Parameters: **kwarg: Arguments passed to :py:class:`ZarrTiffStore`. """ return ZarrTiffStore(self, **kwargs) def asrgb(self, *args: Any, **kwargs: Any) -> NDArray[Any]: """Return image from frame as RGB(A). Work in progress. Do not use. :meta private: """ return TiffPage.asrgb(self, *args, **kwargs) # type: ignore[arg-type] def segments(self, *args: Any, **kwargs: Any) -> Iterator[ tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ] ]: """Return iterator over decoded tiles or strips. Parameters: **kwargs: Arguments passed to :py:meth:`TiffPage.segments`. :meta private: """ return TiffPage.segments( self, *args, **kwargs # type: ignore[arg-type] ) @property def index(self) -> int: """Index of frame in IFD chain.""" return self._index[-1] @property def treeindex(self) -> tuple[int, ...]: """Index of frame in IFD tree.""" return self._index @property def keyframe(self) -> TiffPage | None: """TiffPage with same properties as this frame.""" return self._keyframe @keyframe.setter def keyframe(self, keyframe: TiffPage, /) -> None: if self._keyframe == keyframe: return if self._keyframe is not None: raise RuntimeError('cannot reset keyframe') if len(self.dataoffsets) != len(keyframe.dataoffsets): raise RuntimeError('incompatible keyframe') if keyframe.is_contiguous: self.databytecounts = keyframe.databytecounts self._keyframe = keyframe @property def is_frame(self) -> bool: """Object is :py:class:`TiffFrame` instance.""" return True @property def is_virtual(self) -> bool: """Frame does not have IFD structure in file.""" return self.offset <= 0 @property def is_subifd(self) -> bool: """Frame is SubIFD of another page.""" return len(self._index) > 1 @property def is_final(self) -> bool: assert self._keyframe is not None return self._keyframe.is_final @property def is_contiguous(self) -> bool: assert self._keyframe is not None return self._keyframe.is_contiguous @property def is_memmappable(self) -> bool: assert self._keyframe is not None return self._keyframe.is_memmappable @property def hash(self) -> int: assert self._keyframe is not None return self._keyframe.hash @property def shape(self) -> tuple[int, ...]: assert self._keyframe is not None return self._keyframe.shape @property def shaped(self) -> tuple[int, int, int, int, int]: assert self._keyframe is not None return self._keyframe.shaped @property def chunks(self) -> tuple[int, ...]: assert self._keyframe is not None return self._keyframe.chunks @property def chunked(self) -> tuple[int, ...]: assert self._keyframe is not None return self._keyframe.chunked @property def tile(self) -> tuple[int, ...] 
| None: assert self._keyframe is not None return self._keyframe.tile @property def name(self) -> str: index = self._index if len(self._index) > 1 else self._index[0] return f'TiffFrame {index}' @property def ndim(self) -> int: assert self._keyframe is not None return self._keyframe.ndim @property def dims(self) -> tuple[str, ...]: assert self._keyframe is not None return self._keyframe.dims @property def sizes(self) -> dict[str, int]: assert self._keyframe is not None return self._keyframe.sizes @property def coords(self) -> dict[str, NDArray[Any]]: assert self._keyframe is not None return self._keyframe.coords @property def size(self) -> int: assert self._keyframe is not None return self._keyframe.size @property def nbytes(self) -> int: assert self._keyframe is not None return self._keyframe.nbytes @property def dtype(self) -> numpy.dtype[Any] | None: assert self._keyframe is not None return self._keyframe.dtype @property def axes(self) -> str: assert self._keyframe is not None return self._keyframe.axes def get_resolution( self, unit: RESUNIT | int | None = None, scale: float | int | None = None, ) -> tuple[int | float, int | float]: assert self._keyframe is not None return self._keyframe.get_resolution(unit, scale) @property def resolution(self) -> tuple[float, float]: assert self._keyframe is not None return self._keyframe.resolution @property def resolutionunit(self) -> int: assert self._keyframe is not None return self._keyframe.resolutionunit @property def datetime(self) -> DateTime | None: # TODO: TiffFrame.datetime can differ from TiffPage.datetime? assert self._keyframe is not None return self._keyframe.datetime @property def compression(self) -> int: assert self._keyframe is not None return self._keyframe.compression @property def decode( self, ) -> Callable[ ..., tuple[ NDArray[Any] | None, tuple[int, int, int, int, int], tuple[int, int, int, int], ], ]: assert self._keyframe is not None return self._keyframe.decode def __repr__(self) -> str: index = self._index if len(self._index) > 1 else self._index[0] return f'' def __str__(self) -> str: return self._str() def _str(self, detail: int = 0, width: int = 79) -> str: """Return string containing information about TiffFrame.""" if self._keyframe is None: info = '' kf = None else: info = ' '.join( s for s in ( 'x'.join(str(i) for i in self.shape), str(self.dtype), ) ) kf = self._keyframe._str(width=width - 11) if detail > 3: of = pformat(self.dataoffsets, width=width - 9, height=detail - 3) bc = pformat( self.databytecounts, width=width - 13, height=detail - 3 ) info = f'\n Keyframe {kf}\n Offsets {of}\n Bytecounts {bc}' index = self._index if len(self._index) > 1 else self._index[0] return f'TiffFrame {index} @{self.offset} {info}' @final class TiffPages(Sequence[TiffPage | TiffFrame]): """Sequence of TIFF image file directories (IFD chain). TiffPages instances have a state, such as a cache and keyframe, and are not thread-safe. All attributes are read-only. Parameters: arg: If a *TiffFile*, the file position must be at offset to offset to TiffPage. If a *TiffPage* or *TiffFrame*, page offsets are read from the SubIFDs tag. Only the first page is initially read from the file. index: Position of IFD chain in IFD tree. 
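
# Illustrative sketch, not part of the library: reading pages of a
# multi-page file as lightweight TiffFrame instances that share the
# first page as keyframe. 'temp.tif' is a hypothetical file name.
#
# >>> with TiffFile('temp.tif') as tif:
# ...     tif.pages.useframes = True
# ...     frame = tif.pages[1]  # TiffFrame, not TiffPage
# ...     image = frame.asarray()
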
""" parent: TiffFile | None = None """TiffFile instance pages belongs to.""" _pages: list[TiffPage | TiffFrame | int] # list of pages _keyframe: TiffPage | None _tiffpage: type[TiffPage] | type[TiffFrame] # class used for reading pages _indexed: bool _cached: bool _cache: bool _offset: int _nextpageoffset: int | None _index: tuple[int, ...] | None def __init__( self, arg: TiffFile | TiffPage | TiffFrame, /, *, index: Sequence[int] | int | None = None, ) -> None: offset: int self.parent = None self._pages = [] # cache of TiffPages, TiffFrames, or their offsets self._indexed = False # True if offsets to all pages were read self._cached = False # True if all pages were read into cache self._tiffpage = TiffPage # class used for reading pages self._keyframe = None # page that is currently used as keyframe self._cache = False # do not cache frames or pages (if not keyframe) self._offset = 0 self._nextpageoffset = None if index is None: self._index = None elif isinstance(index, (int, numpy.integer)): self._index = (int(index),) else: self._index = tuple(index) if isinstance(arg, TiffFile): # read offset to first page from current file position self.parent = arg fh = self.parent.filehandle self._nextpageoffset = fh.tell() offset = struct.unpack( self.parent.tiff.offsetformat, fh.read(self.parent.tiff.offsetsize), )[0] if offset == 0: logger().warning(f'{arg!r} contains no pages') self._indexed = True return elif arg.subifds is not None: # use offsets from SubIFDs tag offsets = arg.subifds self.parent = arg.parent fh = self.parent.filehandle if len(offsets) == 0 or offsets[0] == 0: logger().warning(f'{arg!r} contains invalid SubIFDs') self._indexed = True return offset = offsets[0] else: self._indexed = True return self._offset = offset if offset >= fh.size: logger().warning( f'{self!r} invalid offset to first page {offset!r}' ) self._indexed = True return pageindex: int | tuple[int, ...] = ( 0 if self._index is None else self._index + (0,) ) # read and cache first page fh.seek(offset) page = TiffPage(self.parent, index=pageindex) self._pages.append(page) self._keyframe = page if self._nextpageoffset is None: # offsets from SubIFDs tag self._pages.extend(offsets[1:]) self._indexed = True self._cached = True @property def pages(self) -> list[TiffPage | TiffFrame | int]: """Deprecated. Use the TiffPages sequence interface. :meta private: """ warnings.warn( ' is deprecated since 2024.5.22. ' 'Use the TiffPages sequence interface.', DeprecationWarning, stacklevel=2, ) return self._pages @property def first(self) -> TiffPage: """First page as TiffPage if exists, else raise IndexError.""" return cast(TiffPage, self._pages[0]) @property def is_multipage(self) -> bool: """IFD chain contains more than one page.""" try: self._seek(1) return True except IndexError: return False @property def cache(self) -> bool: """Pages and frames are being cached. When set to *False*, the cache is cleared. """ return self._cache @cache.setter def cache(self, value: bool, /) -> None: value = bool(value) if self._cache and not value: self._clear() self._cache = value @property def useframes(self) -> bool: """Use TiffFrame (True) or TiffPage (False).""" return self._tiffpage == TiffFrame @useframes.setter def useframes(self, value: bool, /) -> None: self._tiffpage = TiffFrame if value else TiffPage @property def keyframe(self) -> TiffPage | None: """TiffPage used as keyframe for new TiffFrames.""" return self._keyframe def set_keyframe(self, index: int, /) -> None: """Set keyframe to TiffPage specified by `index`. 
If not found in the cache, the TiffPage at `index` is loaded from file and added to the cache. """ if not isinstance(index, (int, numpy.integer)): raise TypeError(f'indices must be integers, not {type(index)}') index = int(index) if index < 0: index %= len(self) if self._keyframe is not None and self._keyframe.index == index: return if index == 0: self._keyframe = cast(TiffPage, self._pages[0]) return if self._indexed or index < len(self._pages): page = self._pages[index] if isinstance(page, TiffPage): self._keyframe = page return if isinstance(page, TiffFrame): # remove existing TiffFrame self._pages[index] = page.offset # load TiffPage from file tiffpage = self._tiffpage self._tiffpage = TiffPage try: self._keyframe = cast(TiffPage, self._getitem(index)) finally: self._tiffpage = tiffpage # always cache keyframes self._pages[index] = self._keyframe @property def next_page_offset(self) -> int | None: """Offset where offset to new page can be stored.""" if not self._indexed: self._seek(-1) return self._nextpageoffset def get( self, key: int, /, default: TiffPage | TiffFrame | None = None, *, validate: int = 0, cache: bool = False, aspage: bool = True, ) -> TiffPage | TiffFrame: """Return specified page from cache or file. The specified TiffPage or TiffFrame is read from file if it is not found in the cache. Parameters: key: Index of requested page in IFD chain. default: Page or frame to return if key is out of bounds. By default, an IndexError is raised if key is out of bounds. validate: If non-zero, raise RuntimeError if value does not match hash of TiffPage or TiffFrame. cache: Store returned page in cache for future use. aspage: Return TiffPage instance. """ try: return self._getitem( key, validate=validate, cache=cache, aspage=aspage ) except IndexError: if default is None: raise return default def _load(self, keyframe: TiffPage | bool | None = True, /) -> None: """Read all remaining pages from file.""" assert self.parent is not None if self._cached: return pages = self._pages if not pages: return if not self._indexed: self._seek(-1) if not self._cache: return fh = self.parent.filehandle if keyframe is not None: keyframe = self._keyframe for i, page in enumerate(pages): if isinstance(page, (int, numpy.integer)): pageindex: int | tuple[int, ...] 
= ( i if self._index is None else self._index + (i,) ) fh.seek(page) page = self._tiffpage( self.parent, index=pageindex, keyframe=keyframe ) pages[i] = page self._cached = True def _load_virtual_frames(self) -> None: """Calculate virtual TiffFrames.""" assert self.parent is not None pages = self._pages try: if len(pages) > 1: raise ValueError('pages already loaded') page = cast(TiffPage, pages[0]) if not page.is_contiguous: raise ValueError('data not contiguous') self._seek(4) # following pages are int delta = cast(int, pages[2]) - cast(int, pages[1]) if ( cast(int, pages[3]) - cast(int, pages[2]) != delta or cast(int, pages[4]) - cast(int, pages[3]) != delta ): raise ValueError('page offsets not equidistant') page1 = self._getitem(1, validate=page.hash) offsetoffset = page1.dataoffsets[0] - page1.offset if offsetoffset < 0 or offsetoffset > delta: raise ValueError('page offsets not equidistant') pages = [page, page1] filesize = self.parent.filehandle.size - delta for index, offset in enumerate( range(page1.offset + delta, filesize, delta) ): index += 2 d = index * delta dataoffsets = tuple(i + d for i in page.dataoffsets) offset_or_none = offset if offset < 2**31 - 1 else None pages.append( TiffFrame( page.parent, index=( index if self._index is None else self._index + (index,) ), offset=offset_or_none, dataoffsets=dataoffsets, databytecounts=page.databytecounts, keyframe=page, ) ) self._pages = pages self._cache = True self._cached = True self._indexed = True except Exception as exc: if self.parent.filehandle.size >= 2147483648: logger().warning( f'{self!r} <_load_virtual_frames> raised {exc!r:.128}' ) def _clear(self, fully: bool = True, /) -> None: """Delete all but first page from cache. Set keyframe to first page.""" pages = self._pages if not pages: return self._keyframe = cast(TiffPage, pages[0]) if fully: # delete all but first TiffPage/TiffFrame for i, page in enumerate(pages[1:]): if not isinstance(page, int) and page.offset is not None: pages[i + 1] = page.offset else: # delete only TiffFrames for i, page in enumerate(pages): if isinstance(page, TiffFrame) and page.offset is not None: pages[i] = page.offset self._cached = False def _seek(self, index: int, /) -> int: """Seek file to offset of page specified by index and return offset.""" assert self.parent is not None pages = self._pages lenpages = len(pages) if lenpages == 0: raise IndexError('index out of range') fh = self.parent.filehandle if fh.closed: raise ValueError('seek of closed file') if self._indexed or 0 <= index < lenpages: page = pages[index] offset = page if isinstance(page, int) else page.offset return fh.seek(offset) tiff = self.parent.tiff offsetformat = tiff.offsetformat offsetsize = tiff.offsetsize tagnoformat = tiff.tagnoformat tagnosize = tiff.tagnosize tagsize = tiff.tagsize unpack = struct.unpack page = pages[-1] offset = page if isinstance(page, int) else page.offset while lenpages < 2**32: # read offsets to pages from file until index is reached fh.seek(offset) # skip tags try: tagno = int(unpack(tagnoformat, fh.read(tagnosize))[0]) if tagno > 4096: raise TiffFileError(f'suspicious number of tags {tagno}') except Exception as exc: logger().error( f'{self!r} corrupted tag list of page ' f'{lenpages} @{offset} raised {exc!r:.128}', ) del pages[-1] lenpages -= 1 self._indexed = True break self._nextpageoffset = offset + tagnosize + tagno * tagsize fh.seek(self._nextpageoffset) # read offset to next page try: offset = int(unpack(offsetformat, fh.read(offsetsize))[0]) except Exception as exc: 
logger().error( f'{self!r} invalid offset to page ' f'{lenpages + 1} @{self._nextpageoffset} ' f'raised {exc!r:.128}' ) self._indexed = True break if offset == 0: self._indexed = True break if offset >= fh.size: logger().error(f'{self!r} invalid page offset {offset!r}') self._indexed = True break pages.append(offset) lenpages += 1 if 0 <= index < lenpages: break # detect some circular references if lenpages == 100: for i, p in enumerate(pages[:-1]): if offset == (p if isinstance(p, int) else p.offset): index = i self._pages = pages[: i + 1] self._indexed = True logger().error( f'{self!r} invalid circular reference to IFD ' f'{i} at {offset=}' ) break if index >= lenpages: raise IndexError('index out of range') page = pages[index] return fh.seek(page if isinstance(page, int) else page.offset) def _getlist( self, key: int | slice | Iterable[int] | None = None, /, useframes: bool = True, validate: bool = True, ) -> list[TiffPage | TiffFrame]: """Return specified pages as list of TiffPages or TiffFrames. The first item is a TiffPage, and is used as a keyframe for following TiffFrames. """ getitem = self._getitem _useframes = self.useframes if key is None: key = iter(range(len(self))) elif isinstance(key, (int, numpy.integer)): # return single TiffPage key = int(key) self.useframes = False if key == 0: return [self.first] try: return [getitem(key)] finally: self.useframes = _useframes elif isinstance(key, slice): start, stop, _ = key.indices(2**31 - 1) if not self._indexed and max(stop, start) > len(self._pages): self._seek(-1) key = iter(range(*key.indices(len(self._pages)))) elif isinstance(key, Iterable): key = iter(key) else: raise TypeError( f'key must be an integer, slice, or iterable, not {type(key)}' ) # use first page as keyframe assert self._keyframe is not None keyframe = self._keyframe self.set_keyframe(next(key)) validhash = self._keyframe.hash if validate else 0 if useframes: self.useframes = True try: pages = [getitem(i, validate=validhash) for i in key] pages.insert(0, self._keyframe) finally: # restore state self._keyframe = keyframe if useframes: self.useframes = _useframes return pages def _getitem( self, key: int, /, *, validate: int = 0, # hash cache: bool = False, aspage: bool = False, ) -> TiffPage | TiffFrame: """Return specified page from cache or file.""" assert self.parent is not None key = int(key) pages = self._pages if key < 0: key %= len(self) elif self._indexed and key >= len(pages): raise IndexError(f'index {key} out of range({len(pages)})') tiffpage = TiffPage if aspage else self._tiffpage if key < len(pages): page = pages[key] if self._cache and not aspage: if not isinstance(page, (int, numpy.integer)): if validate and validate != page.hash: raise RuntimeError('page hash mismatch') return page elif isinstance(page, (TiffPage, tiffpage)): # page is not an int if ( validate and validate != page.hash # type: ignore[union-attr] ): raise RuntimeError('page hash mismatch') return page # type: ignore[return-value] pageindex: int | tuple[int, ...] = ( key if self._index is None else self._index + (key,) ) self._seek(key) page = tiffpage(self.parent, index=pageindex, keyframe=self._keyframe) assert isinstance(page, (TiffPage, TiffFrame)) if validate and validate != page.hash: raise RuntimeError('page hash mismatch') if self._cache or cache: pages[key] = page return page @overload def __getitem__(self, key: int, /) -> TiffPage | TiffFrame: ... @overload def __getitem__( self, key: slice | Iterable[int], / ) -> list[TiffPage | TiffFrame]: ... 
def __getitem__( self, key: int | slice | Iterable[int], / ) -> TiffPage | TiffFrame | list[TiffPage | TiffFrame]: pages = self._pages getitem = self._getitem if isinstance(key, (int, numpy.integer)): key = int(key) if key == 0: return cast(TiffPage, pages[key]) return getitem(key) if isinstance(key, slice): start, stop, _ = key.indices(2**31 - 1) if not self._indexed and max(stop, start) > len(pages): self._seek(-1) return [getitem(i) for i in range(*key.indices(len(pages)))] if isinstance(key, Iterable): return [getitem(k) for k in key] raise TypeError('key must be an integer, slice, or iterable') def __iter__(self) -> Iterator[TiffPage | TiffFrame]: i = 0 while True: try: yield self._getitem(i) i += 1 except IndexError: break if self._cache: self._cached = True def __bool__(self) -> bool: """Return True if file contains any pages.""" return len(self._pages) > 0 def __len__(self) -> int: """Return number of pages in file.""" if not self._indexed: self._seek(-1) return len(self._pages) def __repr__(self) -> str: return f'' @final class TiffTag: """TIFF tag structure. TiffTag instances are not thread-safe. All attributes are read-only. Parameters: parent: TIFF file tag belongs to. offset: Position of tag structure in file. code: Decimal code of tag. dtype: Data type of tag value item. count: Number of items in tag value. valueoffset: Position of tag value in file. """ __slots__ = ( 'parent', 'offset', 'code', 'dtype', 'count', '_value', 'valueoffset', ) parent: TiffFile | TiffWriter """TIFF file tag belongs to.""" offset: int """Position of tag structure in file.""" code: int """Decimal code of tag.""" dtype: int """:py:class:`DATATYPE` of tag value item.""" count: int """Number of items in tag value.""" valueoffset: int """Position of tag value in file.""" _value: Any def __init__( self, parent: TiffFile | TiffWriter, offset: int, code: int, dtype: DATATYPE | int, count: int, value: Any, valueoffset: int, /, ) -> None: self.parent = parent self.offset = int(offset) self.code = int(code) self.count = int(count) self._value = value self.valueoffset = valueoffset try: self.dtype = DATATYPE(dtype) except ValueError: self.dtype = int(dtype) @classmethod def fromfile( cls, parent: TiffFile, /, *, offset: int | None = None, header: bytes | None = None, validate: bool = True, ) -> TiffTag: """Return TiffTag instance from file. Parameters: parent: TiffFile instance tag is read from. offset: Position of tag structure in file. The default is the position of the file handle. header: Tag structure as bytes. The default is read from the file. validate: Raise TiffFileError if data type or value offset are invalid. Raises: TiffFileError: Data type or value offset are invalid and `validate` is *True*. """ tiff = parent.tiff fh = parent.filehandle if header is None: if offset is None: offset = fh.tell() else: fh.seek(offset) header = fh.read(tiff.tagsize) elif offset is None: offset = fh.tell() valueoffset = offset + tiff.tagsize - tiff.tagoffsetthreshold code, dtype, count, value = struct.unpack( tiff.tagformat1 + tiff.tagformat2[1:], header ) try: valueformat = TIFF.DATA_FORMATS[dtype] except KeyError as exc: msg = ( f' ' f'invalid data type {dtype!r}' ) if validate: raise TiffFileError(msg) from exc logger().error(msg) return cls(parent, offset, code, dtype, count, None, 0) valuesize = count * struct.calcsize(valueformat) if ( valuesize > tiff.tagoffsetthreshold or code in TIFF.TAG_READERS # TODO: only works with offsets? 
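
# Illustrative sketch, not part of the library: TiffPages indexes the
# IFD chain lazily; enabling the cache keeps pages alive across
# accesses. 'temp.tif' is a hypothetical file name.
#
# >>> with TiffFile('temp.tif') as tif:
# ...     tif.pages.cache = True
# ...     npages = len(tif.pages)  # forces indexing of the IFD chain
# ...     last = tif.pages[-1]  # read from file, then cached
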
@final
class TiffTag:
    """TIFF tag structure.

    TiffTag instances are not thread-safe. All attributes are read-only.

    Parameters:
        parent:
            TIFF file tag belongs to.
        offset:
            Position of tag structure in file.
        code:
            Decimal code of tag.
        dtype:
            Data type of tag value item.
        count:
            Number of items in tag value.
        valueoffset:
            Position of tag value in file.

    """

    __slots__ = (
        'parent',
        'offset',
        'code',
        'dtype',
        'count',
        '_value',
        'valueoffset',
    )

    parent: TiffFile | TiffWriter
    """TIFF file tag belongs to."""

    offset: int
    """Position of tag structure in file."""

    code: int
    """Decimal code of tag."""

    dtype: int
    """:py:class:`DATATYPE` of tag value item."""

    count: int
    """Number of items in tag value."""

    valueoffset: int
    """Position of tag value in file."""

    _value: Any

    def __init__(
        self,
        parent: TiffFile | TiffWriter,
        offset: int,
        code: int,
        dtype: DATATYPE | int,
        count: int,
        value: Any,
        valueoffset: int,
        /,
    ) -> None:
        self.parent = parent
        self.offset = int(offset)
        self.code = int(code)
        self.count = int(count)
        self._value = value
        self.valueoffset = valueoffset
        try:
            self.dtype = DATATYPE(dtype)
        except ValueError:
            self.dtype = int(dtype)

    @classmethod
    def fromfile(
        cls,
        parent: TiffFile,
        /,
        *,
        offset: int | None = None,
        header: bytes | None = None,
        validate: bool = True,
    ) -> TiffTag:
        """Return TiffTag instance from file.

        Parameters:
            parent:
                TiffFile instance tag is read from.
            offset:
                Position of tag structure in file.
                The default is the position of the file handle.
            header:
                Tag structure as bytes.
                The default is read from the file.
            validate:
                Raise TiffFileError if data type or value offset are
                invalid.

        Raises:
            TiffFileError:
                Data type or value offset are invalid and `validate` is
                *True*.

        """
        tiff = parent.tiff
        fh = parent.filehandle
        if header is None:
            if offset is None:
                offset = fh.tell()
            else:
                fh.seek(offset)
            header = fh.read(tiff.tagsize)
        elif offset is None:
            offset = fh.tell()

        valueoffset = offset + tiff.tagsize - tiff.tagoffsetthreshold
        code, dtype, count, value = struct.unpack(
            tiff.tagformat1 + tiff.tagformat2[1:], header
        )

        try:
            valueformat = TIFF.DATA_FORMATS[dtype]
        except KeyError as exc:
            msg = (
                f'<tifffile.TiffTag {code} @{offset}> '
                f'invalid data type {dtype!r}'
            )
            if validate:
                raise TiffFileError(msg) from exc
            logger().error(msg)
            return cls(parent, offset, code, dtype, count, None, 0)

        valuesize = count * struct.calcsize(valueformat)
        if (
            valuesize > tiff.tagoffsetthreshold
            or code in TIFF.TAG_READERS  # TODO: only works with offsets?
        ):
            valueoffset = struct.unpack(tiff.offsetformat, value)[0]
            if validate and code in TIFF.TAG_LOAD:
                value = TiffTag._read_value(
                    parent, offset, code, dtype, count, valueoffset
                )
            elif valueoffset < 8 or valueoffset + valuesize > fh.size:
                msg = (
                    f'<tifffile.TiffTag {code} @{offset}> '
                    f'invalid value offset {valueoffset}'
                )
                if validate:
                    raise TiffFileError(msg)
                logger().warning(msg)
                value = None
            elif code in TIFF.TAG_LOAD:
                value = TiffTag._read_value(
                    parent, offset, code, dtype, count, valueoffset
                )
            else:
                value = None
        elif dtype in {1, 2, 7}:
            # BYTES, ASCII, UNDEFINED
            value = value[:valuesize]
        elif (
            tiff.is_ndpi
            and count == 1
            and dtype in {4, 9, 13}
            and value[4:] != b'\x00\x00\x00\x00'
        ):
            # NDPI IFD or LONG, for example, in StripOffsets or
            # StripByteCounts
            value = struct.unpack('<Q', value)[0]
        else:
            value = struct.unpack(
                f'{tiff.byteorder}{count * int(valueformat[0])}'
                f'{valueformat[1]}',
                value[:valuesize],
            )

        value = TiffTag._process_value(value, code, dtype, offset)

        return cls(parent, offset, code, dtype, count, value, valueoffset)

    @staticmethod
    def _read_value(
        parent: TiffFile,
        offset: int,
        code: int,
        dtype: int,
        count: int,
        valueoffset: int,
        /,
    ) -> Any:
        """Read tag value from file."""
        try:
            valueformat = TIFF.DATA_FORMATS[dtype]
        except KeyError as exc:
            raise TiffFileError(
                f'<tifffile.TiffTag {code} @{offset}> '
                f'invalid data type {dtype!r}'
            ) from exc

        fh = parent.filehandle
        byteorder = parent.tiff.byteorder
        offsetsize = parent.tiff.offsetsize

        valuesize = count * struct.calcsize(valueformat)
        if valueoffset < 8 or valueoffset + valuesize > fh.size:
            raise TiffFileError(
                f'<tifffile.TiffTag {code} @{offset}> '
                f'invalid value offset {valueoffset}'
            )
        # if valueoffset % 2:
        #     logger().warning(
        #         f'<tifffile.TiffTag {code} @{offset}> '
        #         'value does not begin on word boundary'
        #     )

        fh.seek(valueoffset)
        if code in TIFF.TAG_READERS:
            readfunc = TIFF.TAG_READERS[code]
            try:
                value = readfunc(fh, byteorder, dtype, count, offsetsize)
            except Exception as exc:
                logger().warning(
                    f'<tifffile.TiffTag {code} @{offset}> '
                    f'raised {exc!r:.128}'
                )
            else:
                return value

        if dtype in {1, 2, 7}:
            # BYTES, ASCII, UNDEFINED
            value = fh.read(valuesize)
            if len(value) != valuesize:
                logger().warning(
                    f'<tifffile.TiffTag {code} @{offset}> '
                    'could not read all values'
                )
        elif code not in TIFF.TAG_TUPLE and count > 1024:
            value = read_numpy(fh, byteorder, dtype, count, offsetsize)
        else:
            value = struct.unpack(
                f'{byteorder}{count * int(valueformat[0])}'
                f'{valueformat[1]}',
                fh.read(valuesize),
            )
        return value

    @staticmethod
    def _process_value(
        value: Any, code: int, dtype: int, offset: int, /
    ) -> Any:
        """Process tag value."""
        if (
            value is None
            or dtype == 1  # BYTE
            or dtype == 7  # UNDEFINED
            or code in TIFF.TAG_READERS
            or not isinstance(value, (bytes, str, tuple))
        ):
            return value

        if dtype == 2:
            # TIFF ASCII fields can contain multiple strings,
            # each terminated with a NUL
            value = value.rstrip(b'\x00')
            try:
                value = value.decode('utf-8').strip()
            except UnicodeDecodeError:
                try:
                    value = value.decode('cp1252').strip()
                except UnicodeDecodeError as exc:
                    logger().warning(
                        f'<tifffile.TiffTag {code} @{offset}> coercing '
                        f'invalid ASCII to bytes, due to {exc!r:.128}'
                    )
            return value

        if code in TIFF.TAG_ENUM:
            t = TIFF.TAG_ENUM[code]
            try:
                value = tuple(t(v) for v in value)
            except ValueError as exc:
                if code not in {259, 317}:
                    # ignore compression/predictor
                    logger().warning(
                        f'<tifffile.TiffTag {code} @{offset}> '
                        f'raised {exc!r:.128}'
                    )

        if len(value) == 1 and code not in TIFF.TAG_TUPLE:
            value = value[0]

        return value

    @property
    def value(self) -> Any:
        """Value of tag, delay-loaded from file if necessary."""
        if self._value is None:
            # print(
            #     f'_read_value {self.code} {TIFF.TAGS.get(self.code)} '
            #     f'{self.dtype}[{self.count}] @{self.valueoffset} '
            # )
            fh = self.parent.filehandle
            with fh.lock:
                closed = fh.closed
                if closed:
                    # this is an inefficient resort in case a user delay
                    # loads tag values from a TiffPage with a closed
                    # FileHandle.
                    warnings.warn(
                        f'{self!r} reading value from closed file',
                        UserWarning,
                    )
                    fh.open()
                try:
                    value = TiffTag._read_value(
                        self.parent,
                        self.offset,
                        self.code,
                        self.dtype,
                        self.count,
                        self.valueoffset,
                    )
                finally:
                    if closed:
                        fh.close()
            self._value = TiffTag._process_value(
                value,
                self.code,
                self.dtype,
                self.offset,
            )
        return self._value

    @value.setter
    def value(self, value: Any, /) -> None:
        self._value = value

    @property
    def dtype_name(self) -> str:
        """Name of data type of tag value."""
        try:
            return self.dtype.name  # type: ignore[attr-defined]
        except AttributeError:
            return f'TYPE{self.dtype}'

    @property
    def name(self) -> str:
        """Name of tag from :py:attr:`_TIFF.TAGS` registry."""
        return TIFF.TAGS.get(self.code, str(self.code))

    @property
    def dataformat(self) -> str:
        """Data type as `struct.pack` format."""
        return TIFF.DATA_FORMATS[self.dtype]

    @property
    def valuebytecount(self) -> int:
        """Number of bytes of tag value in file."""
        return self.count * struct.calcsize(TIFF.DATA_FORMATS[self.dtype])

    def astuple(self) -> TagTuple:
        """Return tag code, dtype, count, and encoded value.

        The encoded value is read from file if necessary.

        """
        if isinstance(self.value, bytes):
            value = self.value
        else:
            tiff = self.parent.tiff
            dataformat = TIFF.DATA_FORMATS[self.dtype]
            count = self.count * int(dataformat[0])
            fmt = f'{tiff.byteorder}{count}{dataformat[1]}'
            try:
                if self.dtype == 2:
                    # ASCII
                    value = struct.pack(fmt, self.value.encode('ascii'))
                    if len(value) != count:
                        raise ValueError
                elif count == 1 and not isinstance(self.value, tuple):
                    value = struct.pack(fmt, self.value)
                else:
                    value = struct.pack(fmt, *self.value)
            except Exception as exc:
                if tiff.is_ndpi and count == 1:
                    raise ValueError(
                        'cannot pack 64-bit NDPI value to 32-bit dtype'
                    ) from exc
                fh = self.parent.filehandle
                pos = fh.tell()
                fh.seek(self.valueoffset)
                value = fh.read(struct.calcsize(fmt))
                fh.seek(pos)
        return self.code, int(self.dtype), self.count, value, True

    def overwrite(
        self,
        value: Any,
        /,
        *,
        dtype: DATATYPE | int | str | None = None,
        erase: bool = True,
    ) -> TiffTag:
        """Write new tag value to file and return new TiffTag instance.

        Warning: changing tag values in TIFF files might result in
        corrupted files or have unexpected side effects.

        The packed value is appended to the file if it is longer than the
        old value. The file position is left where it was.

        Overwriting tag values in NDPI files > 4 GB is only supported if
        single integer values and new offsets do not exceed the 32-bit
        range.

        Parameters:
            value:
                New tag value to write.
                Must be compatible with the `struct.pack` formats
                corresponding to the tag's data type.
            dtype:
                New tag data type. By default, the data type is not
                changed.
            erase:
                Overwrite previous tag values in file with zeros.

        Raises:
            struct.error:
                Value is not compatible with dtype or new offset exceeds
                TIFF size limit.
            ValueError:
                Invalid value or dtype, or old integer value in NDPI files
                exceeds 32-bit range.

        """
        if self.offset < 8 or self.valueoffset < 8:
            raise ValueError(
                f'cannot rewrite tag at offset {self.offset} < 8'
            )

        if hasattr(value, 'filehandle'):
            # passing a TiffFile instance is deprecated and no longer
            # required since 2021.7.30
            raise TypeError(
                'TiffTag.overwrite got an unexpected TiffFile instance '
                'as first argument'
            )

        fh = self.parent.filehandle
        tiff = self.parent.tiff
        if tiff.is_ndpi:
            # only support files < 4GB
            if self.count == 1 and self.dtype in {4, 13}:
                if isinstance(self.value, tuple):
                    v = self.value[0]
                else:
                    v = self.value
                if v > 4294967295:
                    raise ValueError('cannot patch NDPI > 4 GB files')
            tiff = TIFF.CLASSIC_LE

        if value is None:
            value = b''
        if dtype is None:
            dtype = self.dtype
        elif isinstance(dtype, str):
            if len(dtype) > 1 and dtype[0] in '<>|=':
                dtype = dtype[1:]
            try:
                dtype = TIFF.DATA_DTYPES[dtype]
            except KeyError as exc:
                raise ValueError(f'unknown data type {dtype!r}') from exc
        else:
            dtype = enumarg(DATATYPE, dtype)

        packedvalue: bytes | None = None
        dataformat: str
        try:
            dataformat = TIFF.DATA_FORMATS[dtype]
        except KeyError as exc:
            raise ValueError(f'unknown data type {dtype!r}') from exc

        if dtype == 2:
            # strings
            if isinstance(value, str):
                # enforce 7-bit ASCII on Unicode strings
                try:
                    value = value.encode('ascii')
                except UnicodeEncodeError as exc:
                    raise ValueError(
                        'TIFF strings must be 7-bit ASCII'
                    ) from exc
            elif not isinstance(value, bytes):
                raise ValueError('TIFF strings must be 7-bit ASCII')
            if len(value) == 0 or value[-1:] != b'\x00':
                value += b'\x00'
            count = len(value)
            value = (value,)
        elif isinstance(value, bytes):
            # pre-packed binary data
            dtsize = struct.calcsize(dataformat)
            if len(value) % dtsize:
                raise ValueError('invalid packed binary data')
            count = len(value) // dtsize
            packedvalue = value
            value = (value,)
        else:
            try:
                count = len(value)
            except TypeError:
                value = (value,)
                count = 1
            if dtype in {5, 10}:
                if count < 2 or count % 2:
                    raise ValueError('invalid RATIONAL value')
                count //= 2  # rational

        if packedvalue is None:
            packedvalue = struct.pack(
                f'{tiff.byteorder}{count * int(dataformat[0])}'
                f'{dataformat[1]}',
                *value,
            )
        newsize = len(packedvalue)
        oldsize = self.count * struct.calcsize(
            TIFF.DATA_FORMATS[self.dtype]
        )
        valueoffset = self.valueoffset

        pos = fh.tell()
        try:
            if dtype != self.dtype:
                # rewrite data type
                fh.seek(self.offset + 2)
                fh.write(struct.pack(tiff.byteorder + 'H', dtype))

            if oldsize <= tiff.tagoffsetthreshold:
                if newsize <= tiff.tagoffsetthreshold:
                    # inline -> inline: overwrite
                    fh.seek(self.offset + 4)
                    fh.write(
                        struct.pack(tiff.tagformat2, count, packedvalue)
                    )
                else:
                    # inline -> separate: append to file
                    fh.seek(0, os.SEEK_END)
                    valueoffset = fh.tell()
                    if valueoffset % 2:
                        # value offset must begin on a word boundary
                        fh.write(b'\x00')
                        valueoffset += 1
                    # write new offset
                    fh.seek(self.offset + 4)
                    fh.write(
                        struct.pack(
                            tiff.tagformat2,
                            count,
                            struct.pack(tiff.offsetformat, valueoffset),
                        )
                    )
                    # write new value
                    fh.seek(valueoffset)
                    fh.write(packedvalue)
            elif newsize <= tiff.tagoffsetthreshold:
                # separate -> inline: erase old value
                valueoffset = (
                    self.offset + 4 + struct.calcsize(tiff.tagformat2[:2])
                )
                fh.seek(self.offset + 4)
                fh.write(struct.pack(tiff.tagformat2, count, packedvalue))
                if erase:
                    fh.seek(self.valueoffset)
                    fh.write(b'\x00' * oldsize)
            elif (
                newsize <= oldsize
                or self.valueoffset + oldsize == fh.size
            ):
                # separate -> separate smaller: overwrite, erase remaining
                fh.seek(self.offset + 4)
                fh.write(struct.pack(tiff.tagformat2[:2], count))
                fh.seek(self.valueoffset)
                fh.write(packedvalue)
                if erase and oldsize - newsize > 0:
                    fh.write(b'\x00' * (oldsize - newsize))
            else:
                # separate -> separate larger: erase old value,
                # append to file
                fh.seek(0, os.SEEK_END)
                valueoffset = fh.tell()
                if valueoffset % 2:
                    # value offset must begin on a word boundary
                    fh.write(b'\x00')
                    valueoffset += 1
                # write offset
                fh.seek(self.offset + 4)
                fh.write(
                    struct.pack(
                        tiff.tagformat2,
                        count,
                        struct.pack(tiff.offsetformat, valueoffset),
                    )
                )
                # write value
                fh.seek(valueoffset)
                fh.write(packedvalue)
                if erase:
                    fh.seek(self.valueoffset)
                    fh.write(b'\x00' * oldsize)
        finally:
            fh.seek(pos)  # must restore file position

        return TiffTag(
            self.parent,
            self.offset,
            self.code,
            dtype,
            count,
            value,
            valueoffset,
        )

    def _fix_lsm_bitspersample(self) -> None:
        """Correct LSM bitspersample tag.

        Old LSM writers may use a separate region for two 16-bit values,
        although they fit into the tag value element of the tag.

        """
        if self.code != 258 or self.count != 2:
            return
        # TODO: test this case; need example file
        logger().warning(f'{self!r} correcting LSM bitspersample tag')
        value = struct.pack('<HH', *self.value)
        self.valueoffset = struct.unpack('<I', value)[0]
        self.parent.filehandle.seek(self.valueoffset)
        self.value = struct.unpack('<HH', self.parent.filehandle.read(4))

    def __repr__(self) -> str:
        name = '|'.join(TIFF.TAGS.getall(self.code, []))
        if name:
            name = ' ' + name
        return f'<tifffile.TiffTag {self.code}{name} @{self.offset}>'

    def __str__(self) -> str:
        return self._str()

    def _str(self, detail: int = 0, width: int = 79) -> str:
        """Return string containing information about TiffTag."""
        height = 1 if detail <= 0 else 8 * detail
        dtype = self.dtype_name
        if self.count > 1:
            dtype += f'[{self.count}]'
        name = '|'.join(TIFF.TAGS.getall(self.code, []))
        if name:
            name = f'{self.code} {name} @{self.offset}'
        else:
            name = f'{self.code} @{self.offset}'
        line = f'TiffTag {name} {dtype} @{self.valueoffset} '
        line = line[:width]
        try:
            value = self.value
        except TiffFileError:
            value = 'CORRUPTED'
        else:
            try:
                if self.count == 1:
                    value = enumstr(value)
                else:
                    value = pformat(tuple(enumstr(v) for v in value))
            except Exception:
                if not isinstance(value, (tuple, list)):
                    pass
                elif height == 1:
                    value = value[:256]
                elif len(value) > 2048:
                    value = (
                        value[:1024]
                        + value[-1024:]  # type: ignore[operator]
                    )
                value = pformat(value, width=width, height=height)
        if detail <= 0:
            line += '= '
            line += value[:width]
            line = line[:width]
        else:
            line += '\n' + value
        return line
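
# Illustrative sketch, not part of the library: overwriting a tag value
# in place with TiffTag.overwrite. 'temp.tif' and the new value are
# hypothetical; the file must be opened for reading and writing.
#
# >>> with TiffFile('temp.tif', mode='r+') as tif:
# ...     tag = tif.pages.first.tags[305]  # Software
# ...     _ = tag.overwrite('new software name')
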
""" __slots__ = ('_dict', '_list') _dict: dict[int, TiffTag] _list: list[dict[int, TiffTag]] def __init__(self) -> None: self._dict = {} self._list = [self._dict] def add(self, tag: TiffTag, /) -> None: """Add tag.""" code = tag.code for d in self._list: if code not in d: d[code] = tag break else: self._list.append({code: tag}) def keys(self) -> list[int]: """Return codes of all tags.""" return list(self._dict.keys()) def values(self) -> list[TiffTag]: """Return all tags in order they are stored in file.""" tags = (t for d in self._list for t in d.values()) return sorted(tags, key=lambda t: t.offset) def items(self) -> list[tuple[int, TiffTag]]: """Return all (code, tag) pairs in order tags are stored in file.""" items = (i for d in self._list for i in d.items()) return sorted(items, key=lambda i: i[1].offset) def valueof( self, key: int | str, /, default: Any = None, index: int | None = None, ) -> Any: """Return value of tag by code or name if exists, else default. Parameters: key: Code or name of tag to return. default: Another value to return if specified tag is corrupted or not found. index: Specifies tag in case of multiple tags with identical code. The default is the first tag. """ tag = self.get(key, default=None, index=index) if tag is None: return default try: return tag.value except TiffFileError: return default # corrupted tag def get( self, key: int | str, /, default: TiffTag | None = None, index: int | None = None, ) -> TiffTag | None: """Return tag by code or name if exists, else default. Parameters: key: Code or name of tag to return. default: Another tag to return if specified tag is corrupted or not found. index: Specifies tag in case of multiple tags with identical code. The default is the first tag. """ if index is None: if key in self._dict: return self._dict[cast(int, key)] if not isinstance(key, str): return default index = 0 try: tags = self._list[index] except IndexError: return default if key in tags: return tags[cast(int, key)] if not isinstance(key, str): return default for tag in tags.values(): if tag.name == key: return tag return default def getall( self, key: int | str, /, default: Any = None, ) -> list[TiffTag] | None: """Return list of all tags by code or name if exists, else default. Parameters: key: Code or name of tags to return. default: Value to return if no tags are found. """ result: list[TiffTag] = [] for tags in self._list: if key in tags: result.append(tags[cast(int, key)]) else: break if result: return result if not isinstance(key, str): return default for tags in self._list: for tag in tags.values(): if tag.name == key: result.append(tag) break if not result: break return result if result else default def __getitem__(self, key: int | str, /) -> TiffTag: """Return first tag by code or name. 
Raise KeyError if not found.""" if key in self._dict: return self._dict[cast(int, key)] if not isinstance(key, str): raise KeyError(key) for tag in self._dict.values(): if tag.name == key: return tag raise KeyError(key) def __setitem__(self, code: int, tag: TiffTag, /) -> None: """Add tag.""" assert tag.code == code self.add(tag) def __delitem__(self, key: int | str, /) -> None: """Delete all tags by code or name.""" found = False for tags in self._list: if key in tags: found = True del tags[cast(int, key)] else: break if found: return if not isinstance(key, str): raise KeyError(key) for tags in self._list: for tag in tags.values(): if tag.name == key: del tags[tag.code] found = True break else: break if not found: raise KeyError(key) return def __contains__(self, item: object, /) -> bool: """Return if tag is in map.""" if item in self._dict: return True if not isinstance(item, str): return False for tag in self._dict.values(): if tag.name == item: return True return False def __iter__(self) -> Iterator[TiffTag]: """Return iterator over all tags.""" return iter(self.values()) def __len__(self) -> int: """Return number of tags.""" size = 0 for d in self._list: size += len(d) return size def __repr__(self) -> str: return f'' def __str__(self) -> str: return self._str() def _str(self, detail: int = 0, width: int = 79) -> str: """Return string with information about TiffTags.""" info = [] tlines = [] vlines = [] for tag in self: value = tag._str(width=width + 1) tlines.append(value[:width].strip()) if detail > 0 and len(value) > width: try: value = tag.value except Exception: # delay load failed or closed file continue if tag.code in {273, 279, 324, 325}: if detail < 1: value = value[:256] elif len(value) > 1024: value = value[:512] + value[-512:] value = pformat(value, width=width, height=detail * 3) else: value = pformat(value, width=width, height=detail * 8) if tag.count > 1: vlines.append( f'{tag.name} {tag.dtype_name}[{tag.count}]\n{value}' ) else: vlines.append(f'{tag.name}\n{value}') info.append('\n'.join(tlines)) if detail > 0 and vlines: info.append('\n') info.append('\n\n'.join(vlines)) return '\n'.join(info) @final class TiffTagRegistry: """Registry of TIFF tag codes and names. Map tag codes and names to names and codes respectively. One tag code may be registered with several names, for example, 34853 is used for GPSTag or OlympusSIS2. Different tag codes may be registered with the same name, for example, 37387 and 41483 are both named FlashEnergy. Parameters: arg: Mapping of codes to names. Examples: >>> tags = TiffTagRegistry([(34853, 'GPSTag'), (34853, 'OlympusSIS2')]) >>> tags.add(37387, 'FlashEnergy') >>> tags.add(41483, 'FlashEnergy') >>> tags['GPSTag'] 34853 >>> tags[34853] 'GPSTag' >>> tags.getall(34853) ['GPSTag', 'OlympusSIS2'] >>> tags.getall('FlashEnergy') [37387, 41483] >>> len(tags) 4 """ __slots__ = ('_dict', '_list') _dict: dict[int | str, str | int] _list: list[dict[int | str, str | int]] def __init__( self, arg: TiffTagRegistry | dict[int, str] | Sequence[tuple[int, str]], /, ) -> None: self._dict = {} self._list = [self._dict] self.update(arg) def update( self, arg: TiffTagRegistry | dict[int, str] | Sequence[tuple[int, str]], /, ) -> None: """Add mapping of codes to names to registry. Parameters: arg: Mapping of codes to names. 
""" if isinstance(arg, TiffTagRegistry): self._list.extend(arg._list) return if isinstance(arg, dict): arg = list(arg.items()) for code, name in arg: self.add(code, name) def add(self, code: int, name: str, /) -> None: """Add code and name to registry.""" for d in self._list: if code in d and d[code] == name: break if code not in d and name not in d: d[code] = name d[name] = code break else: self._list.append({code: name, name: code}) def items(self) -> list[tuple[int, str]]: """Return all registry items as (code, name).""" items = ( i for d in self._list for i in d.items() if isinstance(i[0], int) ) return sorted(items, key=lambda i: i[0]) # type: ignore[arg-type] @overload def get(self, key: int, /, default: None) -> str | None: ... @overload def get(self, key: str, /, default: None) -> int | None: ... @overload def get(self, key: int, /, default: str) -> str: ... def get( self, key: int | str, /, default: str | None = None ) -> str | int | None: """Return first code or name if exists, else default. Parameters: key: tag code or name to lookup. default: value to return if key is not found. """ for d in self._list: if key in d: return d[key] return default @overload def getall(self, key: int, /, default: None) -> list[str] | None: ... @overload def getall(self, key: str, /, default: None) -> list[int] | None: ... @overload def getall(self, key: int, /, default: list[str]) -> list[str]: ... def getall( self, key: int | str, /, default: list[str] | None = None ) -> list[str] | list[int] | None: """Return list of all codes or names if exists, else default. Parameters: key: tag code or name to lookup. default: value to return if key is not found. """ result = [d[key] for d in self._list if key in d] return result if result else default # type: ignore[return-value] @overload def __getitem__(self, key: int, /) -> str: ... @overload def __getitem__(self, key: str, /) -> int: ... def __getitem__(self, key: int | str, /) -> int | str: """Return first code or name. Raise KeyError if not found.""" for d in self._list: if key in d: return d[key] raise KeyError(key) def __delitem__(self, key: int | str, /) -> None: """Delete all tags of code or name.""" found = False for d in self._list: if key in d: found = True value = d[key] del d[key] del d[value] if not found: raise KeyError(key) def __contains__(self, item: int | str, /) -> bool: """Return if code or name is in registry.""" for d in self._list: if item in d: return True return False def __iter__(self) -> Iterator[tuple[int, str]]: """Return iterator over all items in registry.""" return iter(self.items()) def __len__(self) -> int: """Return number of registered tags.""" size = 0 for d in self._list: size += len(d) return size // 2 def __repr__(self) -> str: return f'' def __str__(self) -> str: return 'TiffTagRegistry(((\n {}\n))'.format( ',\n '.join(f'({code}, {name!r})' for code, name in self.items()) ) @final class TiffPageSeries(Sequence[TiffPage | TiffFrame | None]): """Sequence of TIFF pages making up multi-dimensional image. Many TIFF based formats, such as OME-TIFF, use series of TIFF pages to store chunks of larger, multi-dimensional images. The image shape and position of chunks in the multi-dimensional image is defined in format-specific metadata. All pages in a series must have the same :py:meth:`TiffPage.hash`, that is, the same shape, data type, and storage properties. Items of a series may be None (missing) or instances of :py:class:`TiffPage` or :py:class:`TiffFrame`, possibly belonging to different files. 
Parameters: pages: List of TiffPage, TiffFrame, or None. The file handles of TiffPages or TiffFrames may not be open. shape: Shape of image array in series. dtype: Data type of image array in series. axes: Character codes for dimensions in shape. Length must match shape. attr: Arbitrary metadata associated with series. index: Index of series in multi-series files. parent: TiffFile instance series belongs to. name: Name of series. kind: Nature of series, such as, 'ome' or 'imagej'. truncated: Series is truncated, for example, ImageJ hyperstack > 4 GB. multifile: Series contains pages from multiple files. squeeze: Remove length-1 dimensions (except X and Y) from shape and axes by default. transform: Function to transform image data after decoding. """ levels: list[TiffPageSeries] """Multi-resolution, pyramidal levels. ``levels[0] is self``.""" parent: TiffFile | None """TiffFile instance series belongs to.""" keyframe: TiffPage """TiffPage of series.""" dtype: numpy.dtype[Any] """Data type (native byte order) of image array in series.""" kind: str """Nature of series.""" name: str """Name of image series from metadata.""" transform: Callable[[NDArray[Any]], NDArray[Any]] | None """Function to transform image data after decoding.""" is_multifile: bool """Series contains pages from multiple files.""" is_truncated: bool """Series contains single page describing multi-dimensional image.""" _pages: list[TiffPage | TiffFrame | None] # List of pages in series. # Might contain only first page of contiguous series _index: int # index of series in multi-series files _squeeze: bool _axes: str _axes_squeezed: str _shape: tuple[int, ...] _shape_squeezed: tuple[int, ...] _len: int _attr: dict[str, Any] def __init__( self, pages: Sequence[TiffPage | TiffFrame | None], /, shape: Sequence[int] | None = None, dtype: DTypeLike | None = None, axes: str | None = None, *, attr: dict[str, Any] | None = None, coords: Mapping[str, NDArray[Any] | None] | None = None, index: int | None = None, parent: TiffFile | None = None, name: str | None = None, kind: str | None = None, truncated: bool = False, multifile: bool = False, squeeze: bool = True, transform: Callable[[NDArray[Any]], NDArray[Any]] | None = None, ) -> None: self._shape = () self._shape_squeezed = () self._axes = '' self._axes_squeezed = '' self._attr = {} if attr is None else dict(attr) self._index = int(index) if index else 0 self._pages = list(pages) self.levels = [self] npages = len(self._pages) try: # find open TiffPage keyframe = next( p.keyframe for p in self._pages if p is not None and p.keyframe is not None and not p.keyframe.parent.filehandle.closed ) except StopIteration: keyframe = next( p.keyframe for p in self._pages if p is not None and p.keyframe is not None ) if shape is None: shape = keyframe.shape if axes is None: axes = keyframe.axes if dtype is None: dtype = keyframe.dtype self.dtype = numpy.dtype(dtype) self.kind = kind if kind else '' self.name = name if name else '' self.transform = transform self.keyframe = keyframe self.is_multifile = bool(multifile) self.is_truncated = bool(truncated) if parent is not None: self.parent = parent elif self._pages: self.parent = self.keyframe.parent else: self.parent = None self._set_dimensions(shape, axes, coords, squeeze) if not truncated and npages == 1: s = product(keyframe.shape) if s > 0: self._len = int(product(self.shape) // s) else: self._len = npages else: self._len = npages def _set_dimensions( self, shape: Sequence[int], axes: str, coords: Mapping[str, NDArray[Any] | None] | None = None, 
squeeze: bool = True, /, ) -> None: """Set shape, axes, and coords.""" self._squeeze = bool(squeeze) self._shape = tuple(shape) self._axes = axes self._shape_squeezed, self._axes_squeezed, _ = squeeze_axes( shape, axes ) @property def shape(self) -> tuple[int, ...]: """Shape of image array in series.""" return self._shape_squeezed if self._squeeze else self._shape @property def axes(self) -> str: """Character codes for dimensions in image array.""" return self._axes_squeezed if self._squeeze else self._axes @property def coords(self) -> dict[str, NDArray[Any]]: """Ordered map of dimension names to coordinate arrays.""" raise NotImplementedError # return { # name: numpy.arange(size) # for name, size in zip(self.dims, self.shape) # } def get_shape(self, squeeze: bool | None = None) -> tuple[int, ...]: """Return default, squeezed, or expanded shape of series. Parameters: squeeze: Remove length-1 dimensions from shape. """ if squeeze is None: squeeze = self._squeeze return self._shape_squeezed if squeeze else self._shape def get_axes(self, squeeze: bool | None = None) -> str: """Return default, squeezed, or expanded axes of series. Parameters: squeeze: Remove length-1 dimensions from axes. """ if squeeze is None: squeeze = self._squeeze return self._axes_squeezed if squeeze else self._axes def get_coords( self, squeeze: bool | None = None ) -> dict[str, NDArray[Any]]: """Return default, squeezed, or expanded coords of series. Parameters: squeeze: Remove length-1 dimensions from coords. """ raise NotImplementedError def asarray( self, *, level: int | None = None, **kwargs: Any ) -> NDArray[Any]: """Return images from series of pages as NumPy array. Parameters: level: Pyramid level to return. By default, the base layer is returned. **kwargs: Additional arguments passed to :py:meth:`TiffFile.asarray`. """ if self.parent is None: raise ValueError('no parent') if level is not None: return self.levels[level].asarray(**kwargs) result = self.parent.asarray(series=self, **kwargs) if self.transform is not None: result = self.transform(result) return result def aszarr( self, *, level: int | None = None, **kwargs: Any ) -> ZarrTiffStore: """Return image array from series of pages as Zarr 2 store. Parameters: level: Pyramid level to return. By default, a multi-resolution store is returned. **kwargs: Additional arguments passed to :py:class:`ZarrTiffStore`. 
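        Examples:
            A minimal usage sketch, not from the library's test suite;
            the file name is hypothetical and the zarr 2.x package is
            assumed to be installed::

                import zarr
                with TiffFile('pyramidal.ome.tif') as tif:
                    store = tif.series[0].aszarr(level=0)
                    z = zarr.open(store, mode='r')
                    data = z[:]  # read base level as NumPy array
                    store.close()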
""" if self.parent is None: raise ValueError('no parent') return ZarrTiffStore(self, level=level, **kwargs) @cached_property def dataoffset(self) -> int | None: """Offset to contiguous image data in file.""" if not self._pages: return None pos = 0 for page in self._pages: if page is None or len(page.dataoffsets) == 0: return None if not page.is_final: return None if not pos: pos = page.dataoffsets[0] + page.nbytes continue if pos != page.dataoffsets[0]: return None pos += page.nbytes page = self._pages[0] if page is None or len(page.dataoffsets) == 0: return None offset = page.dataoffsets[0] if ( len(self._pages) == 1 and isinstance(page, TiffPage) and (page.is_imagej or page.is_shaped or page.is_stk) ): # truncated files return offset if pos == offset + product(self.shape) * self.dtype.itemsize: return offset return None @property def is_pyramidal(self) -> bool: """Series contains multiple resolutions.""" return len(self.levels) > 1 @cached_property def attr(self) -> dict[str, Any]: """Arbitrary metadata associated with series.""" return self._attr @property def ndim(self) -> int: """Number of array dimensions.""" return len(self.shape) @property def dims(self) -> tuple[str, ...]: """Names of dimensions in image array.""" # return tuple(self.coords.keys()) return tuple( unique_strings(TIFF.AXES_NAMES.get(ax, ax) for ax in self.axes) ) @property def sizes(self) -> dict[str, int]: """Ordered map of dimension names to lengths.""" # return dict(zip(self.coords.keys(), self.shape)) return dict(zip(self.dims, self.shape)) @cached_property def size(self) -> int: """Number of elements in array.""" return product(self.shape) @cached_property def nbytes(self) -> int: """Number of bytes in array.""" return self.size * self.dtype.itemsize @property def pages(self) -> TiffPageSeries: # sequence of TiffPages or TiffFrame in series # a workaround to keep the old interface working return self def _getitem(self, key: int, /) -> TiffPage | TiffFrame | None: """Return specified page of series from cache or file.""" key = int(key) if key < 0: key %= self._len if len(self._pages) == 1 and 0 < key < self._len: page = self._pages[0] assert page is not None assert self.parent is not None return self.parent.pages._getitem(page.index + key) return self._pages[key] @overload def __getitem__( self, key: int | numpy.integer[Any], / ) -> TiffPage | TiffFrame | None: ... @overload def __getitem__( self, key: slice | Iterable[int], / ) -> list[TiffPage | TiffFrame | None]: ... 
    def __getitem__(
        self, key: int | numpy.integer[Any] | slice | Iterable[int], /
    ) -> TiffPage | TiffFrame | list[TiffPage | TiffFrame | None] | None:
        """Return specified page(s)."""
        if isinstance(key, (int, numpy.integer)):
            return self._getitem(int(key))
        if isinstance(key, slice):
            return [self._getitem(i) for i in range(*key.indices(self._len))]
        if isinstance(key, Iterable) and not isinstance(key, str):
            return [self._getitem(k) for k in key]
        raise TypeError('key must be an integer, slice, or iterable')

    def __iter__(self) -> Iterator[TiffPage | TiffFrame | None]:
        """Return iterator over pages in series."""
        if len(self._pages) == self._len:
            yield from self._pages
        else:
            assert self.parent is not None and self._pages[0] is not None
            pages = self.parent.pages
            index = self._pages[0].index
            for i in range(self._len):
                yield pages[index + i]

    def __len__(self) -> int:
        """Return number of pages in series."""
        return self._len

    def __repr__(self) -> str:
        return f'<tifffile.TiffPageSeries {self._index}>'

    def __str__(self) -> str:
        s = ' '.join(
            s
            for s in (
                snipstr(f'{self.name!r}', 20) if self.name else '',
                'x'.join(str(i) for i in self.shape),
                str(self.dtype),
                self.axes,
                self.kind,
                (f'{len(self.levels)} Levels') if self.is_pyramidal else '',
                f'{len(self)} Pages',
                (f'@{self.dataoffset}') if self.dataoffset else '',
            )
            if s
        )
        return f'TiffPageSeries {self._index} {s}'


# TODO: this interface does not expose index keys except in __getitem__
class ZarrStore(MutableMapping[str, bytes]):
    """Zarr 2 store base class.

    ZarrStore instances must be closed with :py:meth:`ZarrStore.close`,
    which is automatically called when using the 'with' context manager.

    Parameters:
        fillvalue:
            Value to use for missing chunks of Zarr 2 store.
            The default is 0.
        chunkmode:
            Specifies how to chunk data.

    References:
        1. https://zarr.readthedocs.io/en/stable/spec/v2.html
        2.
https://forum.image.sc/t/multiscale-arrays-v0-1/37930 """ _store: dict[str, Any] _fillvalue: int | float _chunkmode: int def __init__( self, /, *, fillvalue: int | float | None = None, chunkmode: CHUNKMODE | int | str | None = None, ) -> None: self._store = {} self._fillvalue = 0 if fillvalue is None else fillvalue if chunkmode is None: self._chunkmode = CHUNKMODE(0) else: self._chunkmode = enumarg(CHUNKMODE, chunkmode) def __enter__(self) -> ZarrStore: return self def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: self.close() def __del__(self) -> None: self.close() def close(self) -> None: """Close ZarrStore.""" def flush(self) -> None: """Flush ZarrStore.""" raise PermissionError('ZarrStore is read-only') def clear(self) -> None: """Clear ZarrStore.""" raise PermissionError('ZarrStore is read-only') def keys(self) -> KeysView[str]: """Return keys in ZarrStore.""" return self._store.keys() def items(self) -> ItemsView[str, Any]: """Return items in ZarrStore.""" return self._store.items() def values(self) -> ValuesView[Any]: """Return values in ZarrStore.""" return self._store.values() def __iter__(self) -> Iterator[str]: return iter(self._store) def __len__(self) -> int: return len(self._store) def __contains__(self, key: object, /) -> bool: if key in self._store: return True assert isinstance(key, str) return self._contains(key) def _contains(self, key: str, /) -> bool: """Return if key is in store.""" raise NotImplementedError def __delitem__(self, key: object, /) -> None: raise PermissionError('ZarrStore is read-only') def __getitem__(self, key: str, /) -> Any: if key in self._store: return self._store[key] if key[-7:] == '.zarray' or key[-7:] == '.zgroup': # catch '.zarray' and 'attribute/.zarray' raise KeyError(key) return self._getitem(key) def _getitem(self, key: str, /) -> NDArray[Any]: """Return chunk from file.""" raise NotImplementedError def __setitem__(self, key: str, value: bytes, /) -> None: if key in self._store: raise KeyError(key) if key[-7:] == '.zarray' or key[-7:] == '.zgroup': # catch '.zarray' and 'attribute/.zarray' raise KeyError(key) return self._setitem(key, value) def _setitem(self, key: str, value: bytes, /) -> None: """Write chunk to file.""" raise NotImplementedError @property def is_multiscales(self) -> bool: """Return if ZarrStore is multi-scales.""" return b'multiscales' in self._store['.zattrs'] @staticmethod def _empty_chunk( shape: tuple[int, ...], dtype: DTypeLike, fillvalue: int | float | None, /, ) -> NDArray[Any]: """Return empty chunk.""" if fillvalue is None or fillvalue == 0: # return bytes(product(shape) * dtype.itemsize) return numpy.zeros(shape, dtype) chunk = numpy.empty(shape, dtype) chunk[:] = fillvalue return chunk # .tobytes() @staticmethod def _dtype_str(dtype: numpy.dtype[Any], /) -> str: """Return dtype as string with native byte order.""" if dtype.itemsize == 1: byteorder = '|' else: byteorder = {'big': '>', 'little': '<'}[sys.byteorder] return byteorder + dtype.str[1:] @staticmethod def _json(obj: Any, /) -> bytes: """Serialize object to JSON formatted string.""" return json.dumps( obj, indent=1, sort_keys=True, ensure_ascii=True, separators=(',', ': '), ).encode('ascii') @staticmethod def _value(value: Any, dtype: numpy.dtype[Any], /) -> Any: """Return value which is serializable to JSON.""" if value is None: return value if dtype.kind == 'b': return bool(value) if dtype.kind in 'ui': return int(value) if dtype.kind == 'f': if numpy.isnan(value): return 'NaN' if numpy.isposinf(value): return 'Infinity' if 
numpy.isneginf(value):
                return '-Infinity'
            return float(value)
        if dtype.kind == 'c':
            value = numpy.array(value, dtype)
            return (
                ZarrStore._value(value.real, dtype.type().real.dtype),
                ZarrStore._value(value.imag, dtype.type().imag.dtype),
            )
        return value

    @staticmethod
    def _ndindex(
        shape: tuple[int, ...], chunks: tuple[int, ...], /
    ) -> Iterator[str]:
        """Return iterator over all chunk index strings."""
        assert len(shape) == len(chunks)
        chunked = tuple(
            i // j + (1 if i % j else 0) for i, j in zip(shape, chunks)
        )
        for indices in numpy.ndindex(chunked):
            yield '.'.join(str(index) for index in indices)


@final
class ZarrTiffStore(ZarrStore):
    """Zarr 2 store interface to image array in TiffPage or TiffPageSeries.

    ZarrTiffStore uses a TiffFile instance for reading and decoding chunks.
    Therefore, ZarrTiffStore instances cannot be pickled.

    For writing, image data must be stored in uncompressed, unpredicted,
    and unpacked form. Sparse strips and tiles are not written.

    Parameters:
        arg:
            TIFF page or series to wrap as Zarr 2 store.
        level:
            Pyramidal level to wrap. The default is 0.
        chunkmode:
            Use strips or tiles (0) or whole page data (2) as chunks.
            The default is 0.
        fillvalue:
            Value to use for missing chunks. The default is 0.
        zattrs:
            Additional attributes to store in `.zattrs`.
        multiscales:
            Create a multiscales compatible Zarr 2 group store.
            By default, create a Zarr 2 array store for pages and
            non-pyramidal series.
        lock:
            Reentrant lock to synchronize seeks and reads from file.
            By default, the lock of the parent's file handle is used.
        squeeze:
            Remove length-1 dimensions from shape of TiffPageSeries.
        maxworkers:
            Maximum number of threads to concurrently decode strips or
            tiles if `chunkmode=2`. If *None* or *0*, use up to
            :py:attr:`_TIFF.MAXWORKERS` threads.
        buffersize:
            Approximate number of bytes to read from file in one pass
            if `chunkmode=2`. The default is :py:attr:`_TIFF.BUFFERSIZE`.
        _openfiles:
            Internal API.
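
    Examples:
        A minimal sketch; the file name is hypothetical and the page is
        assumed to hold a single-sample image so that chunk keys have
        two indices::

            with TiffFile('temp.tif') as tif:
                store = ZarrTiffStore(tif.pages[0])
                chunk = store['0.0']  # first strip or tile as array
                store.close()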
""" _data: list[TiffPageSeries] _filecache: FileCache _transform: Callable[[NDArray[Any]], NDArray[Any]] | None _maxworkers: int | None _buffersize: int | None _squeeze: bool | None _writable: bool _multiscales: bool def __init__( self, arg: TiffPage | TiffFrame | TiffPageSeries, /, *, level: int | None = None, chunkmode: CHUNKMODE | int | str | None = None, fillvalue: int | float | None = None, zattrs: dict[str, Any] | None = None, multiscales: bool | None = None, lock: threading.RLock | NullContext | None = None, squeeze: bool | None = None, maxworkers: int | None = None, buffersize: int | None = None, _openfiles: int | None = None, ) -> None: super().__init__(fillvalue=fillvalue, chunkmode=chunkmode) if self._chunkmode not in {0, 2}: raise NotImplementedError(f'{self._chunkmode!r} not implemented') self._squeeze = None if squeeze is None else bool(squeeze) self._maxworkers = maxworkers self._buffersize = buffersize if isinstance(arg, TiffPageSeries): self._data = arg.levels self._transform = arg.transform if multiscales is not None and not multiscales: level = 0 if level is not None: self._data = [self._data[level]] name = arg.name else: self._data = [TiffPageSeries([arg])] self._transform = None name = 'Unnamed' fh = self._data[0].keyframe.parent._parent.filehandle self._writable = fh.writable() and self._chunkmode == 0 if lock is None: fh.set_lock(True) lock = fh.lock self._filecache = FileCache(size=_openfiles, lock=lock) zattrs = {} if zattrs is None else dict(zattrs) # TODO: Zarr Encoding Specification # https://xarray.pydata.org/en/stable/internals/zarr-encoding-spec.html if multiscales or len(self._data) > 1: # multiscales self._multiscales = True if '_ARRAY_DIMENSIONS' in zattrs: array_dimensions = zattrs.pop('_ARRAY_DIMENSIONS') else: array_dimensions = list(self._data[0].get_axes(squeeze)) self._store['.zgroup'] = ZarrStore._json({'zarr_format': 2}) self._store['.zattrs'] = ZarrStore._json( { # TODO: use https://ngff.openmicroscopy.org/latest/ 'multiscales': [ { 'version': '0.1', 'name': name, 'datasets': [ {'path': str(i)} for i in range(len(self._data)) ], # 'axes': [...] 
# 'type': 'unknown', 'metadata': {}, } ], **zattrs, } ) shape0 = self._data[0].get_shape(squeeze) for level, series in enumerate(self._data): keyframe = series.keyframe keyframe.decode # cache decode function shape = series.get_shape(squeeze) dtype = series.dtype if fillvalue is None: self._fillvalue = fillvalue = keyframe.nodata if self._chunkmode: chunks = keyframe.shape else: chunks = keyframe.chunks self._store[f'{level}/.zattrs'] = ZarrStore._json( { '_ARRAY_DIMENSIONS': [ (f'{ax}{level}' if i != j else ax) for ax, i, j in zip( array_dimensions, shape, shape0 ) ] } ) self._store[f'{level}/.zarray'] = ZarrStore._json( { 'zarr_format': 2, 'shape': shape, 'chunks': ZarrTiffStore._chunks(chunks, shape), 'dtype': ZarrStore._dtype_str(dtype), 'compressor': None, 'fill_value': ZarrStore._value(fillvalue, dtype), 'order': 'C', 'filters': None, } ) if self._writable: self._writable = ZarrTiffStore._is_writable(keyframe) else: self._multiscales = False series = self._data[0] keyframe = series.keyframe keyframe.decode # cache decode function shape = series.get_shape(squeeze) dtype = series.dtype if fillvalue is None: self._fillvalue = fillvalue = keyframe.nodata if self._chunkmode: chunks = keyframe.shape else: chunks = keyframe.chunks if '_ARRAY_DIMENSIONS' not in zattrs: zattrs['_ARRAY_DIMENSIONS'] = list(series.get_axes(squeeze)) self._store['.zattrs'] = ZarrStore._json(zattrs) self._store['.zarray'] = ZarrStore._json( { 'zarr_format': 2, 'shape': shape, 'chunks': ZarrTiffStore._chunks(chunks, shape), 'dtype': ZarrStore._dtype_str(dtype), 'compressor': None, 'fill_value': ZarrStore._value(fillvalue, dtype), 'order': 'C', 'filters': None, } ) if self._writable: self._writable = ZarrTiffStore._is_writable(keyframe) def close(self) -> None: """Close open file handles.""" if hasattr(self, '_filecache'): self._filecache.clear() def write_fsspec( self, jsonfile: str | os.PathLike[Any] | TextIO, /, url: str, *, groupname: str | None = None, templatename: str | None = None, compressors: dict[COMPRESSION | int, str | None] | None = None, version: int | None = None, _shape: Sequence[int] | None = None, _axes: Sequence[str] | None = None, _index: Sequence[int] | None = None, _append: bool = False, _close: bool = True, ) -> None: """Write fsspec ReferenceFileSystem as JSON to file. Parameters: jsonfile: Name or open file handle of output JSON file. url: Remote location of TIFF file(s) without file name(s). groupname: Zarr 2 group name. templatename: Version 1 URL template name. The default is 'u'. compressors: Mapping of :py:class:`COMPRESSION` codes to Numcodecs codec names. version: Version of fsspec file to write. The default is 0. _shape: Shape of file sequence (experimental). _axes: Axes of file sequence (experimental). _index Index of file in sequence (experimental). _append: If *True*, only write index keys and values (experimental). _close: If *True*, no more appends (experimental). Raises: ValueError: ZarrTiffStore cannot be represented as ReferenceFileSystem due to features that are not supported by Zarr 2, Numcodecs, or Imagecodecs: - compressors, such as CCITT - filters, such as bitorder reversal, packed integers - dtypes, such as float24, complex integers - JPEGTables in multi-page series - incomplete chunks, such as `imagelength % rowsperstrip != 0` Files containing incomplete tiles may fail at runtime. Notes: Parameters `_shape`, `_axes`, `_index`, `_append`, and `_close` are an experimental API for joining the ReferenceFileSystems of multiple files of a TiffSequence. 
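        Examples:
            A minimal sketch; the file name and URL are hypothetical,
            and the file's codec must be representable in Numcodecs or
            Imagecodecs::

                with TiffFile('temp.tif') as tif:
                    store = tif.series[0].aszarr()
                    store.write_fsspec(
                        'temp.json', url='https://server/path'
                    )
                    store.close()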
References: - `fsspec ReferenceFileSystem format `_ """ compressors = { 1: None, 8: 'zlib', 32946: 'zlib', 34925: 'lzma', 50013: 'zlib', # pixtiff 5: 'imagecodecs_lzw', 7: 'imagecodecs_jpeg', 22610: 'imagecodecs_jpegxr', 32773: 'imagecodecs_packbits', 33003: 'imagecodecs_jpeg2k', 33004: 'imagecodecs_jpeg2k', 33005: 'imagecodecs_jpeg2k', 33007: 'imagecodecs_jpeg', 34712: 'imagecodecs_jpeg2k', 34887: 'imagecodecs_lerc', 34892: 'imagecodecs_jpeg', 34933: 'imagecodecs_png', 34934: 'imagecodecs_jpegxr', 48124: 'imagecodecs_jetraw', 50000: 'imagecodecs_zstd', # numcodecs.zstd fails w/ unknown sizes 50001: 'imagecodecs_webp', 50002: 'imagecodecs_jpegxl', 52546: 'imagecodecs_jpegxl', **({} if compressors is None else compressors), } for series in self._data: errormsg = ' not supported by the fsspec ReferenceFileSystem' keyframe = series.keyframe if ( keyframe.compression in {65000, 65001, 65002} and keyframe.parent.is_eer ): compressors[keyframe.compression] = 'imagecodecs_eer' if keyframe.compression not in compressors: raise ValueError(f'{keyframe.compression!r} is' + errormsg) if keyframe.fillorder != 1: raise ValueError(f'{keyframe.fillorder!r} is' + errormsg) if keyframe.sampleformat not in {1, 2, 3, 6}: # TODO: support float24 and cint via filters? raise ValueError(f'{keyframe.sampleformat!r} is' + errormsg) if ( keyframe.bitspersample not in { 8, 16, 32, 64, 128, } and keyframe.compression not in { # JPEG 7, 33007, 34892, } and compressors[keyframe.compression] != 'imagecodecs_eer' ): raise ValueError( f'BitsPerSample {keyframe.bitspersample} is' + errormsg ) if ( not self._chunkmode and not keyframe.is_tiled and keyframe.imagelength % keyframe.rowsperstrip ): raise ValueError('incomplete chunks are' + errormsg) if self._chunkmode and not keyframe.is_final: raise ValueError(f'{self._chunkmode!r} is' + errormsg) if keyframe.jpegtables is not None and len(series.pages) > 1: raise ValueError( 'JPEGTables in multi-page files are' + errormsg ) if url is None: url = '' elif url and url[-1] != '/': url += '/' url = url.replace('\\', '/') if groupname is None: groupname = '' elif groupname and groupname[-1] != '/': groupname += '/' byteorder: ByteOrder | None = '<' if sys.byteorder == 'big' else '>' if ( self._data[0].keyframe.parent.byteorder != byteorder or self._data[0].keyframe.dtype is None or self._data[0].keyframe.dtype.itemsize == 1 ): byteorder = None index: str _shape = [] if _shape is None else list(_shape) _axes = [] if _axes is None else list(_axes) if len(_shape) != len(_axes): raise ValueError('len(_shape) != len(_axes)') if _index is None: index = '' elif len(_shape) != len(_index): raise ValueError('len(_shape) != len(_index)') elif _index: index = '.'.join(str(i) for i in _index) index += '.' 
refs: dict[str, Any] = {} refzarr: dict[str, Any] if version == 1: if _append: raise ValueError('cannot append to version 1') if templatename is None: templatename = 'u' refs['version'] = 1 refs['templates'] = {} refs['gen'] = [] templates = {} if self._data[0].is_multifile: i = 0 for page in self._data[0].pages: if page is None or page.keyframe is None: continue fname = page.keyframe.parent.filehandle.name if fname in templates: continue key = f'{templatename}{i}' templates[fname] = f'{{{{{key}}}}}' refs['templates'][key] = url + fname i += 1 else: fname = self._data[0].keyframe.parent.filehandle.name key = f'{templatename}' templates[fname] = f'{{{{{key}}}}}' refs['templates'][key] = url + fname refs['refs'] = refzarr = {} else: refzarr = refs if not _append: if groupname: # TODO: support nested groups refzarr['.zgroup'] = ZarrStore._json( {'zarr_format': 2} ).decode() for key, value in self._store.items(): if '.zattrs' in key and _axes: value = json.loads(value) if '_ARRAY_DIMENSIONS' in value: value['_ARRAY_DIMENSIONS'] = ( _axes + value['_ARRAY_DIMENSIONS'] ) value = ZarrStore._json(value) elif '.zarray' in key: level = int(key.split('/')[0]) if '/' in key else 0 keyframe = self._data[level].keyframe value = json.loads(value) if _shape: value['shape'] = _shape + value['shape'] value['chunks'] = [1] * len(_shape) + value['chunks'] codec_id = compressors[keyframe.compression] if codec_id == 'imagecodecs_jpeg': # TODO: handle JPEG color spaces jpegtables = keyframe.jpegtables if jpegtables is None: tables = None else: import base64 tables = base64.b64encode(jpegtables).decode() jpegheader = keyframe.jpegheader if jpegheader is None: header = None else: import base64 header = base64.b64encode(jpegheader).decode() ( colorspace_jpeg, colorspace_data, ) = jpeg_decode_colorspace( keyframe.photometric, keyframe.planarconfig, keyframe.extrasamples, keyframe.is_jfif, ) value['compressor'] = { 'id': codec_id, 'tables': tables, 'header': header, 'bitspersample': keyframe.bitspersample, 'colorspace_jpeg': colorspace_jpeg, 'colorspace_data': colorspace_data, } elif ( codec_id == 'imagecodecs_webp' and keyframe.samplesperpixel == 4 ): value['compressor'] = { 'id': codec_id, 'hasalpha': True, } elif codec_id == 'imagecodecs_eer': if keyframe.compression == 65002: rlebits = int(keyframe.tags.valueof(65007, 7)) horzbits = int(keyframe.tags.valueof(65008, 2)) vertbits = int(keyframe.tags.valueof(65009, 2)) elif keyframe.compression == 65001: rlebits = 7 horzbits = 2 vertbits = 2 else: rlebits = 8 horzbits = 2 vertbits = 2 value['compressor'] = { 'id': codec_id, 'shape': keyframe.chunks, 'rlebits': rlebits, 'horzbits': horzbits, 'vertbits': vertbits, } elif codec_id is not None: value['compressor'] = {'id': codec_id} if byteorder is not None: value['dtype'] = byteorder + value['dtype'][1:] if keyframe.predictor > 1: # predictors need access to chunk shape and dtype # requires imagecodecs > 2021.8.26 to read if keyframe.predictor in {2, 34892, 34893}: filter_id = 'imagecodecs_delta' else: filter_id = 'imagecodecs_floatpred' if keyframe.predictor <= 3: dist = 1 elif keyframe.predictor in {34892, 34894}: dist = 2 else: dist = 4 if ( keyframe.planarconfig == 1 and keyframe.samplesperpixel > 1 ): axis = -2 else: axis = -1 value['filters'] = [ { 'id': filter_id, 'axis': axis, 'dist': dist, 'shape': value['chunks'], 'dtype': value['dtype'], } ] value = ZarrStore._json(value) refzarr[groupname + key] = value.decode() fh: TextIO if hasattr(jsonfile, 'write'): fh = jsonfile # type: ignore[assignment] else: fh = 
open(jsonfile, 'w', encoding='utf-8') if version == 1: fh.write(json.dumps(refs, indent=1).rsplit('}"', 1)[0] + '}"') indent = ' ' elif _append: indent = ' ' else: fh.write(json.dumps(refs, indent=1)[:-2]) indent = ' ' for key, value in self._store.items(): if '.zarray' in key: value = json.loads(value) shape = value['shape'] chunks = value['chunks'] levelstr = (key.split('/')[0] + '/') if '/' in key else '' for chunkindex in ZarrStore._ndindex(shape, chunks): key = levelstr + chunkindex keyframe, page, _, offset, bytecount = self._parse_key(key) key = levelstr + index + chunkindex if page and self._chunkmode and offset is None: offset = page.dataoffsets[0] bytecount = keyframe.nbytes if offset and bytecount: fname = keyframe.parent.filehandle.name if version == 1: fname = templates[fname] else: fname = f'{url}{fname}' fh.write( f',\n{indent}"{groupname}{key}": ' f'["{fname}", {offset}, {bytecount}]' ) # TODO: support nested groups if version == 1: fh.write('\n }\n}') elif _close: fh.write('\n}') if not hasattr(jsonfile, 'write'): fh.close() def _contains(self, key: str, /) -> bool: """Return if key is in store.""" try: _, page, _, offset, bytecount = self._parse_key(key) except (KeyError, IndexError): return False if self._chunkmode and offset is None: return True return ( page is not None and offset is not None and bytecount is not None and offset > 0 and bytecount > 0 ) def _getitem(self, key: str, /) -> NDArray[Any]: """Return chunk from file.""" keyframe, page, chunkindex, offset, bytecount = self._parse_key(key) if page is None or offset == 0 or bytecount == 0: raise KeyError(key) fh = page.parent.filehandle if self._chunkmode: if offset is not None: # contiguous image data in page or series # create virtual frame instead of loading page from file assert bytecount is not None page = TiffFrame( page.parent, index=0, keyframe=keyframe, dataoffsets=(offset,), databytecounts=(bytecount,), ) self._filecache.open(fh) chunk = page.asarray( lock=self._filecache.lock, maxworkers=self._maxworkers, buffersize=self._buffersize, ) self._filecache.close(fh) if self._transform is not None: chunk = self._transform(chunk) return chunk assert offset is not None and bytecount is not None chunk_bytes = self._filecache.read(fh, offset, bytecount) decodeargs: dict[str, Any] = {'_fullsize': True} if page.jpegtables is not None: decodeargs['jpegtables'] = page.jpegtables if keyframe.jpegheader is not None: decodeargs['jpegheader'] = keyframe.jpegheader assert chunkindex is not None chunk = keyframe.decode( chunk_bytes, chunkindex, **decodeargs # type: ignore[assignment] )[0] assert chunk is not None if self._transform is not None: chunk = self._transform(chunk) if self._chunkmode: chunks = keyframe.shape else: chunks = keyframe.chunks if chunk.size != product(chunks): raise RuntimeError(f'{chunk.size} != {product(chunks)}') return chunk # .tobytes() def _setitem(self, key: str, value: bytes, /) -> None: """Write chunk to file.""" if not self._writable: raise PermissionError('ZarrStore is read-only') keyframe, page, chunkindex, offset, bytecount = self._parse_key(key) if ( page is None or offset is None or offset == 0 or bytecount is None or bytecount == 0 ): return if bytecount < len(value): value = value[:bytecount] self._filecache.write(page.parent.filehandle, offset, value) def _parse_key(self, key: str, /) -> tuple[ TiffPage, TiffPage | TiffFrame | None, int | None, int | None, int | None, ]: """Return keyframe, page, index, offset, and bytecount from key. Raise KeyError if key is not valid. 
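        For example, in a multiscales store, key '1/0.2' addresses
        pyramid level 1 and chunk index (0, 2); the indices are
        illustrative.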
""" if self._multiscales: try: level, key = key.split('/') series = self._data[int(level)] except (ValueError, IndexError) as exc: raise KeyError(key) from exc else: series = self._data[0] keyframe = series.keyframe pageindex, chunkindex = self._indices(key, series) if series.dataoffset is not None: # contiguous or truncated page = series[0] if page is None or page.dtype is None or page.keyframe is None: return keyframe, None, chunkindex, 0, 0 offset = pageindex * page.size * page.dtype.itemsize try: offset += page.dataoffsets[chunkindex] except IndexError as exc: raise KeyError(key) from exc if self._chunkmode: bytecount = page.size * page.dtype.itemsize return page.keyframe, page, chunkindex, offset, bytecount elif self._chunkmode: with self._filecache.lock: page = series[pageindex] if page is None or page.keyframe is None: return keyframe, None, None, 0, 0 return page.keyframe, page, None, None, None else: with self._filecache.lock: page = series[pageindex] if page is None or page.keyframe is None: return keyframe, None, chunkindex, 0, 0 try: offset = page.dataoffsets[chunkindex] except IndexError: # raise KeyError(key) from exc # issue #249: Philips may be missing last row of tiles return page.keyframe, page, chunkindex, 0, 0 try: bytecount = page.databytecounts[chunkindex] except IndexError as exc: raise KeyError(key) from exc return page.keyframe, page, chunkindex, offset, bytecount def _indices(self, key: str, series: TiffPageSeries, /) -> tuple[int, int]: """Return page and strile indices from Zarr chunk index.""" keyframe = series.keyframe shape = series.get_shape(self._squeeze) try: indices = [int(i) for i in key.split('.')] except ValueError as exc: raise KeyError(key) from exc assert len(indices) == len(shape) if self._chunkmode: chunked = (1,) * len(keyframe.shape) else: chunked = keyframe.chunked p = 1 for i, s in enumerate(shape[::-1]): p *= s if p == keyframe.size: i = len(indices) - i - 1 frames_indices = indices[:i] strile_indices = indices[i:] frames_chunked = shape[:i] strile_chunked = list(shape[i:]) # updated later break else: raise RuntimeError if len(strile_chunked) == len(keyframe.shape): strile_chunked = list(chunked) else: # get strile_chunked including singleton dimensions i = len(strile_indices) - 1 j = len(keyframe.shape) - 1 while True: if strile_chunked[i] == keyframe.shape[j]: strile_chunked[i] = chunked[j] i -= 1 j -= 1 elif strile_chunked[i] == 1: i -= 1 else: raise RuntimeError('shape does not match page shape') if i < 0 or j < 0: break assert product(strile_chunked) == product(chunked) if len(frames_indices) > 0: frameindex = int( numpy.ravel_multi_index(frames_indices, frames_chunked) ) else: frameindex = 0 if len(strile_indices) > 0: strileindex = int( numpy.ravel_multi_index(strile_indices, strile_chunked) ) else: strileindex = 0 return frameindex, strileindex @staticmethod def _chunks( chunks: tuple[int, ...], shape: tuple[int, ...], / ) -> tuple[int, ...]: """Return chunks with same length as shape.""" ndim = len(shape) if ndim == 0: return () # empty array if 0 in shape: return (1,) * ndim newchunks = [] i = ndim - 1 j = len(chunks) - 1 while True: if j < 0: newchunks.append(1) i -= 1 elif shape[i] > 1 and chunks[j] > 1: newchunks.append(chunks[j]) i -= 1 j -= 1 elif shape[i] == chunks[j]: # both 1 newchunks.append(1) i -= 1 j -= 1 elif shape[i] == 1: newchunks.append(1) i -= 1 elif chunks[j] == 1: newchunks.append(1) j -= 1 else: raise RuntimeError if i < 0 or ndim == len(newchunks): break # assert ndim == len(newchunks) return 
tuple(newchunks[::-1]) @staticmethod def _is_writable(keyframe: TiffPage) -> bool: """Return True if chunks are writable.""" return ( keyframe.compression == 1 and keyframe.fillorder == 1 and keyframe.sampleformat in {1, 2, 3, 6} and keyframe.bitspersample in {8, 16, 32, 64, 128} # and ( # keyframe.rowsperstrip == 0 # or keyframe.imagelength % keyframe.rowsperstrip == 0 # ) ) def __enter__(self) -> ZarrTiffStore: return self def __repr__(self) -> str: return f'' @final class ZarrFileSequenceStore(ZarrStore): """Zarr 2 store interface to image array in FileSequence. Parameters: filesequence: FileSequence instance to wrap as Zarr 2 store. Files in containers are not supported. fillvalue: Value to use for missing chunks. The default is 0. chunkmode: Currently only one chunk per file is supported. chunkshape: Shape of chunk in each file. Must match ``FileSequence.imread(file, **imreadargs).shape``. chunkdtype: Data type of chunk in each file. Must match ``FileSequence.imread(file, **imreadargs).dtype``. axestiled: Axes to be tiled. Map stacked sequence axis to chunk axis. zattrs: Additional attributes to store in `.zattrs`. imreadargs: Arguments passed to :py:attr:`FileSequence.imread`. **kwargs: Arguments passed to :py:attr:`FileSequence.imread`in addition to `imreadargs`. Notes: If `chunkshape` or `chunkdtype` are *None* (default), their values are determined by reading the first file with ``FileSequence.imread(arg.files[0], **imreadargs)``. """ imread: Callable[..., NDArray[Any]] """Function to read image array from single file.""" _lookup: dict[tuple[int, ...], str] _chunks: tuple[int, ...] _dtype: numpy.dtype[Any] _tiled: TiledSequence _commonpath: str _kwargs: dict[str, Any] def __init__( self, filesequence: FileSequence, /, *, fillvalue: int | float | None = None, chunkmode: CHUNKMODE | int | str | None = None, chunkshape: Sequence[int] | None = None, chunkdtype: DTypeLike | None = None, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, zattrs: dict[str, Any] | None = None, imreadargs: dict[str, Any] | None = None, **kwargs: Any, ) -> None: super().__init__(fillvalue=fillvalue, chunkmode=chunkmode) if self._chunkmode not in {0, 3}: raise ValueError(f'invalid chunkmode {self._chunkmode!r}') if not isinstance(filesequence, FileSequence): raise TypeError('not a FileSequence') if filesequence._container: raise NotImplementedError('cannot open container as Zarr 2 store') # TODO: deprecate kwargs? if imreadargs is not None: kwargs |= imreadargs self._kwargs = kwargs self._imread = filesequence.imread self._commonpath = filesequence.commonpath() if chunkshape is None or chunkdtype is None: chunk = filesequence.imread(filesequence[0], **kwargs) self._chunks = chunk.shape self._dtype = chunk.dtype else: self._chunks = tuple(chunkshape) self._dtype = numpy.dtype(chunkdtype) chunk = None self._tiled = TiledSequence( filesequence.shape, self._chunks, axestiled=axestiled ) self._lookup = dict( zip(self._tiled.indices(filesequence.indices), filesequence) ) zattrs = {} if zattrs is None else dict(zattrs) # TODO: add _ARRAY_DIMENSIONS to ZarrFileSequenceStore # if '_ARRAY_DIMENSIONS' not in zattrs: # zattrs['_ARRAY_DIMENSIONS'] = list(...) 
self._store['.zattrs'] = ZarrStore._json(zattrs) self._store['.zarray'] = ZarrStore._json( { 'zarr_format': 2, 'shape': self._tiled.shape, 'chunks': self._tiled.chunks, 'dtype': ZarrStore._dtype_str(self._dtype), 'compressor': None, 'fill_value': ZarrStore._value(fillvalue, self._dtype), 'order': 'C', 'filters': None, } ) def _contains(self, key: str, /) -> bool: """Return if key is in store.""" try: indices = tuple(int(i) for i in key.split('.')) except Exception: return False return indices in self._lookup def _getitem(self, key: str, /) -> NDArray[Any]: """Return chunk from file.""" indices = tuple(int(i) for i in key.split('.')) filename = self._lookup.get(indices, None) if filename is None: raise KeyError(key) return self._imread(filename, **self._kwargs) def _setitem(self, key: str, value: bytes, /) -> None: raise PermissionError('ZarrStore is read-only') def write_fsspec( self, jsonfile: str | os.PathLike[Any] | TextIO, /, url: str, *, quote: bool | None = None, groupname: str | None = None, templatename: str | None = None, codec_id: str | None = None, version: int | None = None, _append: bool = False, _close: bool = True, ) -> None: """Write fsspec ReferenceFileSystem as JSON to file. Parameters: jsonfile: Name or open file handle of output JSON file. url: Remote location of TIFF file(s) without file name(s). quote: Quote file names, that is, replace ' ' with '%20'. The default is True. groupname: Zarr 2 group name. templatename: Version 1 URL template name. The default is 'u'. codec_id: Name of Numcodecs codec to decode files or chunks. version: Version of fsspec file to write. The default is 0. _append, _close: Experimental API. References: - `fsspec ReferenceFileSystem format `_ """ from urllib.parse import quote as quote_ kwargs = self._kwargs.copy() if codec_id is not None: pass elif self._imread is imread: codec_id = 'tifffile' elif 'imagecodecs' in self._imread.__module__: if ( self._imread.__name__ != 'imread' or 'codec' not in self._kwargs ): raise ValueError('cannot determine codec_id') codec = kwargs.pop('codec') if isinstance(codec, (list, tuple)): codec = codec[0] if callable(codec): codec = codec.__name__.split('_')[0] codec_id = { 'apng': 'imagecodecs_apng', 'avif': 'imagecodecs_avif', 'gif': 'imagecodecs_gif', 'heif': 'imagecodecs_heif', 'jpeg': 'imagecodecs_jpeg', 'jpeg8': 'imagecodecs_jpeg', 'jpeg12': 'imagecodecs_jpeg', 'jpeg2k': 'imagecodecs_jpeg2k', 'jpegls': 'imagecodecs_jpegls', 'jpegxl': 'imagecodecs_jpegxl', 'jpegxr': 'imagecodecs_jpegxr', 'ljpeg': 'imagecodecs_ljpeg', 'lerc': 'imagecodecs_lerc', # 'npy': 'imagecodecs_npy', 'png': 'imagecodecs_png', 'qoi': 'imagecodecs_qoi', 'tiff': 'imagecodecs_tiff', 'webp': 'imagecodecs_webp', 'zfp': 'imagecodecs_zfp', }[codec] else: # TODO: choose codec from filename raise ValueError('cannot determine codec_id') if url is None: url = '' elif url and url[-1] != '/': url += '/' if groupname is None: groupname = '' elif groupname and groupname[-1] != '/': groupname += '/' refs: dict[str, Any] = {} if version == 1: if _append: raise ValueError('cannot append to version 1 files') if templatename is None: templatename = 'u' refs['version'] = 1 refs['templates'] = {templatename: url} refs['gen'] = [] refs['refs'] = refzarr = {} url = f'{{{{{templatename}}}}}' else: refzarr = refs if groupname and not _append: refzarr['.zgroup'] = ZarrStore._json({'zarr_format': 2}).decode() for key, value in self._store.items(): if '.zarray' in key: value = json.loads(value) # TODO: make kwargs serializable value['compressor'] = {'id': 
codec_id, **kwargs} value = ZarrStore._json(value) refzarr[groupname + key] = value.decode() fh: TextIO if hasattr(jsonfile, 'write'): fh = jsonfile # type: ignore[assignment] else: fh = open(jsonfile, 'w', encoding='utf-8') if version == 1: fh.write(json.dumps(refs, indent=1).rsplit('}"', 1)[0] + '}"') indent = ' ' elif _append: fh.write(',\n') fh.write(json.dumps(refs, indent=1)[2:-2]) indent = ' ' else: fh.write(json.dumps(refs, indent=1)[:-2]) indent = ' ' prefix = len(self._commonpath) for key, value in self._store.items(): if '.zarray' in key: value = json.loads(value) for index, filename in sorted( self._lookup.items(), key=lambda x: x[0] ): filename = filename[prefix:].replace('\\', '/') if quote is None or quote: filename = quote_(filename) if filename[0] == '/': filename = filename[1:] indexstr = '.'.join(str(i) for i in index) fh.write( f',\n{indent}"{groupname}{indexstr}": ' f'["{url}{filename}"]' ) if version == 1: fh.write('\n }\n}') elif _close: fh.write('\n}') if not hasattr(jsonfile, 'write'): fh.close() def __enter__(self) -> ZarrFileSequenceStore: return self def __repr__(self) -> str: return f'' def __str__(self) -> str: return '\n '.join( ( self.__class__.__name__, 'shape: {}'.format( ', '.join(str(i) for i in self._tiled.shape) ), 'chunks: {}'.format( ', '.join(str(i) for i in self._tiled.chunks) ), f'dtype: {self._dtype}', f'fillvalue: {self._fillvalue}', ) ) class FileSequence(Sequence[str]): r"""Sequence of files containing compatible array data. Parameters: imread: Function to read image array from single file. files: Glob filename pattern or sequence of file names. If *None*, use '\*'. All files must contain array data of same shape and dtype. Binary streams are not supported. container: Name or open instance of ZIP file in which files are stored. sort: Function to sort file names if `files` is a pattern. The default is :py:func:`natural_sorted`. If *False*, disable sorting. parse: Function to parse sequence of sorted file names to dims, shape, chunk indices, and filtered file names. The default is :py:func:`parse_filenames` if `kwargs` contains `'pattern'`. **kwargs: Additional arguments passed to `parse` function. Examples: >>> filenames = ['temp_C001T002.tif', 'temp_C001T001.tif'] >>> ims = TiffSequence(filenames, pattern=r'_(C)(\d+)(T)(\d+)') >>> ims[0] 'temp_C001T002.tif' >>> ims.shape (1, 2) >>> ims.axes 'CT' """ imread: Callable[..., NDArray[Any]] """Function to read image array from single file.""" shape: tuple[int, ...] """Shape of file series. Excludes shape of chunks in files.""" axes: str """Character codes for dimensions in shape.""" dims: tuple[str, ...] """Names of dimensions in shape.""" indices: tuple[tuple[int, ...]] """Indices of files in shape.""" _files: list[str] # list of file names _container: Any # TODO: container type? 
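
    # A usage sketch (the container name is hypothetical), showing how
    # files in a ZIP archive can be read through FileSequence, assuming
    # the archive contains TIFF files readable by tifffile.imread:
    #
    #   seq = FileSequence(imread, '*.tif', container='stack.zip')
    #   data = seq.asarray()
    #   seq.close()
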
def __init__( self, imread: Callable[..., NDArray[Any]], files: ( str | os.PathLike[Any] | Sequence[str | os.PathLike[Any]] | None ), *, container: str | os.PathLike[Any] | None = None, sort: Callable[..., Any] | bool | None = None, parse: Callable[..., Any] | None = None, **kwargs: Any, ) -> None: sort_func: Callable[..., list[str]] | None = None if files is None: files = '*' if sort is None: sort_func = natural_sorted elif callable(sort): sort_func = sort elif sort: sort_func = natural_sorted # elif not sort: # sort_func = None self._container = container if container is not None: import fnmatch if isinstance(container, (str, os.PathLike)): import zipfile self._container = zipfile.ZipFile(container) elif not hasattr(self._container, 'open'): raise ValueError('invalid container') if isinstance(files, str): files = fnmatch.filter(self._container.namelist(), files) if sort_func is not None: files = sort_func(files) elif isinstance(files, os.PathLike): files = [os.fspath(files)] if sort is not None and sort_func is not None: files = sort_func(files) elif isinstance(files, str): files = glob.glob(files) if sort_func is not None: files = sort_func(files) files = [os.fspath(f) for f in files] # type: ignore[union-attr] if not files: raise ValueError('no files found') if not callable(imread): raise ValueError('invalid imread function') if container: # redefine imread to read from container def imread_( fname: str, _imread: Any = imread, **kwargs: Any ) -> NDArray[Any]: with self._container.open(fname) as handle1: with io.BytesIO(handle1.read()) as handle2: return _imread(handle2, **kwargs) imread = imread_ if parse is None and kwargs.get('pattern', None): parse = parse_filenames if parse: try: dims, shape, indices, files = parse(files, **kwargs) except ValueError as exc: raise ValueError('failed to parse file names') from exc else: dims = ('sequence',) shape = (len(files),) indices = tuple((i,) for i in range(len(files))) assert isinstance(files, list) and isinstance(files[0], str) codes = TIFF.AXES_CODES axes = ''.join(codes.get(dim.lower(), dim[0].upper()) for dim in dims) self._files = files self.imread = imread self.axes = axes self.dims = tuple(dims) self.shape = tuple(shape) self.indices = indices def asarray( self, *, imreadargs: dict[str, Any] | None = None, chunkshape: tuple[int, ...] | None = None, chunkdtype: DTypeLike | None = None, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, out_inplace: bool | None = None, ioworkers: int | None = 1, out: OutputType = None, **kwargs: Any, ) -> NDArray[Any]: """Return images from files as NumPy array. Parameters: imreadargs: Arguments passed to :py:attr:`FileSequence.imread`. chunkshape: Shape of chunk in each file. Must match ``FileSequence.imread(file, **imreadargs).shape``. By default, this is determined by reading the first file. chunkdtype: Data type of chunk in each file. Must match ``FileSequence.imread(file, **imreadargs).dtype``. By default, this is determined by reading the first file. axestiled: Axes to be tiled. Map stacked sequence axis to chunk axis. ioworkers: Maximum number of threads to execute :py:attr:`FileSequence.imread` asynchronously. If *0*, use up to :py:attr:`_TIFF.MAXIOWORKERS` threads. Using threads can significantly improve runtime when reading many small files from a network share. out_inplace: :py:attr:`FileSequence.imread` decodes directly to the output instead of returning an array, which is copied to the output. Not all imread functions support this, especially in non-contiguous cases. 
out: Specifies how image array is returned. By default, create a new array. If a *numpy.ndarray*, a writable array to which the images are copied. If *'memmap'*, create a memory-mapped array in a temporary file. If a *string* or *open file*, the file used to create a memory-mapped array. **kwargs: Arguments passed to :py:attr:`FileSequence.imread` in addition to `imreadargs`. Raises: IndexError, ValueError: Array shapes do not match. """ # TODO: deprecate kwargs? files = self._files if imreadargs is not None: kwargs |= imreadargs if ioworkers is None or ioworkers < 1: ioworkers = TIFF.MAXIOWORKERS ioworkers = min(len(files), ioworkers) assert isinstance(ioworkers, int) # mypy bug? if out_inplace is None and self.imread == imread: out_inplace = True else: out_inplace = bool(out_inplace) if chunkshape is None or chunkdtype is None: im = self.imread(files[0], **kwargs) chunkshape = im.shape chunkdtype = im.dtype del im chunkdtype = numpy.dtype(chunkdtype) assert chunkshape is not None if axestiled: tiled = TiledSequence(self.shape, chunkshape, axestiled=axestiled) result = create_output(out, tiled.shape, chunkdtype) def func(index: tuple[int | slice, ...], fname: str) -> None: # read single image from file into result # if index is None: # return if out_inplace: self.imread(fname, out=result[index], **kwargs) else: im = self.imread(fname, **kwargs) result[index] = im del im # delete memory-mapped file if ioworkers < 2: for index, fname in zip(tiled.slices(self.indices), files): func(index, fname) else: with ThreadPoolExecutor(ioworkers) as executor: for _ in executor.map( func, tiled.slices(self.indices), files ): pass else: shape = self.shape + chunkshape result = create_output(out, shape, chunkdtype) result = result.reshape(-1, *chunkshape) def func(index: tuple[int | slice, ...], fname: str) -> None: # read single image from file into result if index is None: return index_ = int( numpy.ravel_multi_index( index, # type: ignore[arg-type] self.shape, ) ) if out_inplace: self.imread(fname, out=result[index_], **kwargs) else: im = self.imread(fname, **kwargs) result[index_] = im del im # delete memory-mapped file if ioworkers < 2: for index, fname in zip(self.indices, files): func(index, fname) else: with ThreadPoolExecutor(ioworkers) as executor: for _ in executor.map(func, self.indices, files): pass result.shape = shape return result def aszarr(self, **kwargs: Any) -> ZarrFileSequenceStore: """Return images from files as Zarr 2 store. Parameters: **kwargs: Arguments passed to :py:class:`ZarrFileSequenceStore`. """ return ZarrFileSequenceStore(self, **kwargs) def close(self) -> None: """Close open files.""" if self._container is not None: self._container.close() self._container = None def commonpath(self) -> str: """Return longest common sub-path of each file in sequence.""" if len(self._files) == 1: commonpath = os.path.dirname(self._files[0]) else: commonpath = os.path.commonpath(self._files) return commonpath @property def files(self) -> list[str]: """Deprecated. Use the FileSequence sequence interface. :meta private: """ warnings.warn( ' is deprecated since 2024.5.22. 
' 'Use the FileSequence sequence interface.',
            DeprecationWarning,
            stacklevel=2,
        )
        return self._files

    @property
    def files_missing(self) -> int:
        """Number of empty chunks."""
        return product(self.shape) - len(self._files)

    def __iter__(self) -> Iterator[str]:
        """Return iterator over all file names."""
        return iter(self._files)

    def __len__(self) -> int:
        return len(self._files)

    @overload
    def __getitem__(self, key: int, /) -> str: ...

    @overload
    def __getitem__(self, key: slice, /) -> list[str]: ...

    def __getitem__(self, key: int | slice, /) -> str | list[str]:
        return self._files[key]

    def __enter__(self) -> FileSequence:
        return self

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        self.close()

    def __repr__(self) -> str:
        return f'<tifffile.FileSequence>'

    def __str__(self) -> str:
        file = str(self._container) if self._container else self._files[0]
        file = os.path.split(file)[-1]
        return '\n '.join(
            (
                self.__class__.__name__,
                file,
                f'files: {len(self._files)} ({self.files_missing} missing)',
                'shape: {}'.format(', '.join(str(i) for i in self.shape)),
                'dims: {}'.format(', '.join(s for s in self.dims)),
                # f'axes: {self.axes}',
            )
        )


@final
class TiffSequence(FileSequence):
    r"""Sequence of TIFF files containing compatible array data.

    Same as :py:class:`FileSequence` with the :py:func:`imread` function,
    `'\*.tif'` glob pattern, and `out_inplace` enabled by default.

    """

    def __init__(
        self,
        files: (
            str | os.PathLike[Any] | Sequence[str | os.PathLike[Any]] | None
        ) = None,
        *,
        imread: Callable[..., NDArray[Any]] = imread,
        **kwargs: Any,
    ) -> None:
        super().__init__(imread, '*.tif' if files is None else files, **kwargs)

    def __repr__(self) -> str:
        return f'<tifffile.TiffSequence>'


@final
class TiledSequence:
    """Tiled sequence of chunks.

    Transform a sequence of stacked chunks to tiled chunks.

    Parameters:
        stackshape:
            Shape of stacked sequence excluding chunks.
        chunkshape:
            Shape of chunks.
        axestiled:
            Axes to be tiled. Map stacked sequence axis to chunk axis.
            By default, the sequence is not tiled.
        axes:
            Character codes for dimensions in stackshape and chunkshape.

    Examples:
        >>> ts = TiledSequence((1, 2), (3, 4), axestiled={1: 0}, axes='ABYX')
        >>> ts.shape
        (1, 6, 4)
        >>> ts.chunks
        (1, 3, 4)
        >>> ts.axes
        'AYX'

    """

    chunks: tuple[int, ...]
    """Shape of chunks in tiled sequence."""
    # with same number of dimensions as shape

    shape: tuple[int, ...]
    """Shape of tiled sequence including chunks."""

    axes: str | tuple[str, ...] | None
    """Dimensions codes of tiled sequence."""

    shape_squeezed: tuple[int, ...]
    """Shape of tiled sequence with length-1 dimensions removed."""

    axes_squeezed: str | tuple[str, ...] | None
    """Dimensions codes of tiled sequence with length-1 dimensions removed."""

    _stackdims: int
    """Number of dimensions in stack excluding chunks."""

    _chunkdims: int
    """Number of dimensions in chunks."""

    _shape_untiled: tuple[int, ...]
    """Shape of untiled sequence (stackshape + chunkshape)."""

    _axestiled: tuple[tuple[int, int], ...]
"""Map axes to tile from stack to chunks.""" def __init__( self, stackshape: Sequence[int], chunkshape: Sequence[int], /, *, axestiled: dict[int, int] | Sequence[tuple[int, int]] | None = None, axes: str | Sequence[str] | None = None, ) -> None: self._stackdims = len(stackshape) self._chunkdims = len(chunkshape) self._shape_untiled = tuple(stackshape) + tuple(chunkshape) if axes is not None and len(axes) != len(self._shape_untiled): raise ValueError( 'axes length does not match stackshape + chunkshape' ) if axestiled: axestiled = dict(axestiled) for ax0, ax1 in axestiled.items(): axestiled[ax0] = ax1 + self._stackdims self._axestiled = tuple(reversed(sorted(axestiled.items()))) axes_list = [] if axes is None else list(axes) shape = list(self._shape_untiled) chunks = [1] * self._stackdims + list(chunkshape) used = set() for ax0, ax1 in self._axestiled: if ax0 in used or ax1 in used: raise ValueError('duplicate axis') used.add(ax0) used.add(ax1) shape[ax1] *= stackshape[ax0] for ax0, ax1 in self._axestiled: del shape[ax0] del chunks[ax0] if axes_list: del axes_list[ax0] self.shape = tuple(shape) self.chunks = tuple(chunks) if axes is None: self.axes = None elif isinstance(axes, str): self.axes = ''.join(axes_list) else: self.axes = tuple(axes_list) else: self._axestiled = () self.shape = self._shape_untiled self.chunks = (1,) * self._stackdims + tuple(chunkshape) if axes is None: self.axes = None elif isinstance(axes, str): self.axes = axes else: self.axes = tuple(axes) assert len(self.shape) == len(self.chunks) if self.axes is not None: assert len(self.shape) == len(self.axes) if self.axes is None: self.shape_squeezed = tuple(i for i in self.shape if i > 1) self.axes_squeezed = None else: keep = ('X', 'Y', 'width', 'length', 'height') self.shape_squeezed = tuple( i for i, ax in zip(self.shape, self.axes) if i > 1 or ax in keep ) squeezed = tuple( ax for i, ax in zip(self.shape, self.axes) if i > 1 or ax in keep ) self.axes_squeezed = ( ''.join(squeezed) if isinstance(self.axes, str) else squeezed ) def indices( self, indices: Iterable[Sequence[int]], / ) -> Iterator[tuple[int, ...]]: """Return iterator over chunk indices of tiled sequence. Parameters: indices: Indices of chunks in stacked sequence. """ chunkindex = [0] * self._chunkdims for index in indices: if index is None: yield None else: if len(index) != self._stackdims: raise ValueError(f'{len(index)} != {self._stackdims}') index = list(index) + chunkindex for ax0, ax1 in self._axestiled: index[ax1] = index[ax0] for ax0, ax1 in self._axestiled: del index[ax0] yield tuple(index) def slices( self, indices: Iterable[Sequence[int]] | None = None, / ) -> Iterator[tuple[int | slice, ...]]: """Return iterator over slices of chunks in tiled sequence. Parameters: indices: Indices of chunks in stacked sequence. 
""" wholeslice: list[int | slice] chunkslice: list[int | slice] = [slice(None)] * self._chunkdims if indices is None: indices = numpy.ndindex(self._shape_untiled[: self._stackdims]) for index in indices: if index is None: yield None else: assert len(index) == self._stackdims wholeslice = [*index, *chunkslice] for ax0, ax1 in self._axestiled: j = self._shape_untiled[ax1] i = cast(int, wholeslice[ax0]) * j wholeslice[ax1] = slice(i, i + j) for ax0, ax1 in self._axestiled: del wholeslice[ax0] yield tuple(wholeslice) @property def ndim(self) -> int: """Number of dimensions of tiled sequence excluding chunks.""" return len(self.shape) @property def is_tiled(self) -> bool: """Sequence is tiled.""" return bool(self._axestiled) @final class FileHandle: """Binary file handle. A limited, special purpose binary file handle that can: - handle embedded files (for example, LSM within LSM files). - re-open closed files (for multi-file formats, such as OME-TIFF). - read and write NumPy arrays and records from file-like objects. When initialized from another file handle, do not use the other handle unless this FileHandle is closed. FileHandle instances are not thread-safe. Parameters: file: File name or seekable binary stream, such as open file, BytesIO, or fsspec OpenFile. mode: File open mode if `file` is file name. The default is 'rb'. Files are always opened in binary mode. name: Name of file if `file` is binary stream. offset: Start position of embedded file. The default is the current file position. size: Size of embedded file. The default is the number of bytes from `offset` to the end of the file. """ # TODO: make FileHandle a subclass of IO[bytes] __slots__ = ( '_fh', '_file', '_mode', '_name', '_dir', '_lock', '_offset', '_size', '_close', ) _file: str | os.PathLike[Any] | FileHandle | IO[bytes] | None _fh: IO[bytes] | None _mode: str _name: str _dir: str _offset: int _size: int _close: bool _lock: threading.RLock | NullContext def __init__( self, file: str | os.PathLike[Any] | FileHandle | IO[bytes], /, mode: ( Literal['r', 'r+', 'w', 'x', 'rb', 'r+b', 'wb', 'xb'] | None ) = None, *, name: str | None = None, offset: int | None = None, size: int | None = None, ) -> None: self._mode = 'rb' if mode is None else mode self._fh = None self._file = file # reference to original argument for re-opening self._name = name if name else '' self._dir = '' self._offset = -1 if offset is None else offset self._size = -1 if size is None else size self._close = True self._lock = NullContext() self.open() assert self._fh is not None def open(self) -> None: """Open or re-open file.""" if self._fh is not None: return # file is open if isinstance(self._file, os.PathLike): self._file = os.fspath(self._file) if isinstance(self._file, str): # file name if self._mode[-1:] != 'b': self._mode += 'b' if self._mode not in {'rb', 'r+b', 'wb', 'xb'}: raise ValueError(f'invalid mode {self._mode}') self._file = os.path.realpath(self._file) self._dir, self._name = os.path.split(self._file) self._fh = open(self._file, self._mode, encoding=None) self._close = True self._offset = max(0, self._offset) elif isinstance(self._file, FileHandle): # FileHandle self._fh = self._file._fh self._offset = max(0, self._offset) self._offset += self._file._offset self._close = False if not self._name: if self._offset: name, ext = os.path.splitext(self._file._name) self._name = f'{name}@{self._offset}{ext}' else: self._name = self._file._name self._mode = self._file._mode self._dir = self._file._dir elif hasattr(self._file, 'seek'): # binary 
stream: open file, BytesIO, fsspec LocalFileOpener # cast to IO[bytes] even it might not be self._fh = cast(IO[bytes], self._file) try: self._fh.tell() except Exception as exc: raise ValueError('binary stream is not seekable') from exc if self._offset < 0: self._offset = self._fh.tell() self._close = False if not self._name: try: self._dir, self._name = os.path.split(self._fh.name) except AttributeError: try: self._dir, self._name = os.path.split( self._fh.path # type: ignore[attr-defined] ) except AttributeError: self._name = 'Unnamed binary stream' try: self._mode = self._fh.mode except AttributeError: pass elif hasattr(self._file, 'open'): # fsspec OpenFile _file: Any = self._file self._fh = cast(IO[bytes], _file.open()) try: self._fh.tell() except Exception as exc: try: self._fh.close() except Exception: pass raise ValueError('OpenFile is not seekable') from exc if self._offset < 0: self._offset = self._fh.tell() self._close = True if not self._name: try: self._dir, self._name = os.path.split(_file.path) except AttributeError: self._name = 'Unnamed binary stream' try: self._mode = _file.mode except AttributeError: pass else: raise ValueError( 'the first parameter must be a file name ' 'or seekable binary file object, ' f'not {type(self._file)!r}' ) assert self._fh is not None if self._offset: self._fh.seek(self._offset) if self._size < 0: pos = self._fh.tell() self._fh.seek(self._offset, os.SEEK_END) self._size = self._fh.tell() self._fh.seek(pos) def close(self) -> None: """Close file handle.""" if self._close and self._fh is not None: try: self._fh.close() except Exception: # PermissionError on MacOS. See issue #184 pass self._fh = None def fileno(self) -> int: """Return underlying file descriptor if exists, else raise OSError.""" assert self._fh is not None try: return self._fh.fileno() except (OSError, AttributeError) as exc: raise OSError( f'{type(self._fh)} does not have a file descriptor' ) from exc def writable(self) -> bool: """Return True if stream supports writing.""" assert self._fh is not None if hasattr(self._fh, 'writable'): return self._fh.writable() return False def seekable(self) -> bool: """Return True if stream supports random access.""" return True def tell(self) -> int: """Return file's current position.""" assert self._fh is not None return self._fh.tell() - self._offset def seek(self, offset: int, /, whence: int = 0) -> int: """Set file's current position. Parameters: offset: Position of file handle relative to position indicated by `whence`. whence: Relative position of `offset`. 0 (`os.SEEK_SET`) beginning of file (default). 1 (`os.SEEK_CUR`) current position. 2 (`os.SEEK_END`) end of file. """ assert self._fh is not None if self._offset: if whence == 0: return ( self._fh.seek(self._offset + offset, whence) - self._offset ) if whence == 2 and self._size > 0: return ( self._fh.seek(self._offset + self._size + offset, 0) - self._offset ) return self._fh.seek(offset, whence) def read(self, size: int = -1, /) -> bytes: """Return bytes read from file. Parameters: size: Number of bytes to read from file. By default, read until the end of the file. """ if size < 0 and self._offset: size = self._size assert self._fh is not None return self._fh.read(size) def readinto(self, buffer: bytes, /) -> int: """Read bytes from file into buffer. Parameters: buffer: Buffer to read into. Returns: Number of bytes read from file. 
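        Examples:
            A minimal sketch; the file name is hypothetical::

                fh = FileHandle('temp.tif')
                buffer = bytearray(8)
                n = fh.readinto(buffer)  # fills buffer with first bytes
                fh.close()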
""" assert self._fh is not None return self._fh.readinto(buffer) # type: ignore[attr-defined] def write(self, buffer: bytes, /) -> int: """Write bytes to file and return number of bytes written. Parameters: buffer: Bytes to write to file. Returns: Number of bytes written. """ assert self._fh is not None return self._fh.write(buffer) def flush(self) -> None: """Flush write buffers of stream if applicable.""" assert self._fh is not None if hasattr(self._fh, 'flush'): self._fh.flush() def memmap_array( self, dtype: DTypeLike, shape: tuple[int, ...], offset: int = 0, *, mode: str = 'r', order: str = 'C', ) -> NDArray[Any]: """Return `numpy.memmap` of array data stored in file. Parameters: dtype: Data type of array in file. shape: Shape of array in file. offset: Start position of array-data in file. mode: File is opened in this mode. The default is read-only. order: Order of ndarray memory layout. The default is 'C'. """ if not self.is_file: raise ValueError('cannot memory-map file without fileno') assert self._fh is not None return numpy.memmap( self._fh, # type: ignore[call-overload] dtype=dtype, mode=mode, offset=self._offset + offset, shape=shape, order=order, ) def read_array( self, dtype: DTypeLike, count: int = -1, offset: int = 0, *, out: NDArray[Any] | None = None, ) -> NDArray[Any]: """Return NumPy array from file in native byte order. Parameters: dtype: Data type of array to read. count: Number of items to read. By default, all items are read. offset: Start position of array-data in file. out: NumPy array to read into. By default, a new array is created. """ dtype = numpy.dtype(dtype) if count < 0: nbytes = self._size if out is None else out.nbytes count = nbytes // dtype.itemsize else: nbytes = count * dtype.itemsize result = numpy.empty(count, dtype) if out is None else out if result.nbytes != nbytes: raise ValueError('size mismatch') assert self._fh is not None if offset: self._fh.seek(self._offset + offset) try: n = self._fh.readinto(result) # type: ignore[attr-defined] except AttributeError: result[:] = numpy.frombuffer(self._fh.read(nbytes), dtype).reshape( result.shape ) n = nbytes if n != nbytes: raise ValueError(f'failed to read {nbytes} bytes, got {n}') if not result.dtype.isnative: if not dtype.isnative: result.byteswap(True) result = result.view(result.dtype.newbyteorder()) elif result.dtype.isnative != dtype.isnative: result.byteswap(True) if out is not None: if hasattr(out, 'flush'): out.flush() return result def read_record( self, dtype: DTypeLike, shape: tuple[int, ...] | int | None = 1, *, byteorder: Literal['S', '<', '>', '=', '|'] | None = None, ) -> numpy.recarray[Any, Any]: """Return NumPy record from file. Parameters: dtype: Data type of record array to read. shape: Shape of record array to read. byteorder: Byte order of record array to read. """ assert self._fh is not None dtype = numpy.dtype(dtype) if byteorder is not None: dtype = dtype.newbyteorder(byteorder) try: record = numpy.rec.fromfile( # type: ignore[call-overload] self._fh, dtype, shape ) except Exception: if shape is None: shape = self._size // dtype.itemsize size = product(sequence(shape)) * dtype.itemsize # data = bytearray(size) # n = self._fh.readinto(data) # data = data[:n] # TODO: record is not writable data = self._fh.read(size) record = numpy.rec.fromstring( data, dtype, shape, ) return record[0] if shape == 1 else record def write_empty(self, size: int, /) -> int: """Append null-bytes to file. The file position must be at the end of the file. 
        Parameters:
            size: Number of null-bytes to write to file.

        """
        if size < 1:
            return 0
        assert self._fh is not None
        self._fh.seek(size - 1, os.SEEK_CUR)
        self._fh.write(b'\x00')
        return size

    def write_array(
        self,
        data: NDArray[Any],
        dtype: DTypeLike = None,
        /,
    ) -> int:
        """Write NumPy array to file in C contiguous order.

        Parameters:
            data: Array to write to file.
            dtype: Data type to convert `data` to before writing.

        """
        assert self._fh is not None
        pos = self._fh.tell()
        # writing non-contiguous arrays is very slow
        data = numpy.ascontiguousarray(data, dtype)
        try:
            data.tofile(self._fh)
        except io.UnsupportedOperation:
            # numpy cannot write to BytesIO
            self._fh.write(data.tobytes())
        return self._fh.tell() - pos

    def read_segments(
        self,
        offsets: Sequence[int],
        bytecounts: Sequence[int],
        /,
        indices: Sequence[int] | None = None,
        *,
        sort: bool = True,
        lock: threading.RLock | NullContext | None = None,
        buffersize: int | None = None,
        flat: bool = True,
    ) -> (
        Iterator[tuple[bytes | None, int]]
        | Iterator[list[tuple[bytes | None, int]]]
    ):
        """Return iterator over segments read from file and their indices.

        The purpose of this function is to

        - reduce small or random reads.
        - reduce acquiring reentrant locks.
        - synchronize seeks and reads.
        - limit size of segments read into memory at once
          (ThreadPoolExecutor.map does not collect iterables lazily).

        Parameters:
            offsets: Offsets of segments to read from file.
            bytecounts: Byte counts of segments to read from file.
            indices: Indices of segments in image.
                The default is `range(len(offsets))`.
            sort: Read segments from file in order of their offsets.
            lock: Reentrant lock to synchronize seeks and reads.
            buffersize: Approximate number of bytes to read from file
                in one pass. The default is :py:attr:`_TIFF.BUFFERSIZE`.
            flat: If *True*, return iterator over individual
                (segment, index) tuples.
                Else, return an iterator over a list of (segment, index)
                tuples that were acquired in one pass.

        Yields:
            Individual or lists of `(segment, index)` tuples.

        """
        # TODO: Cythonize this?
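        # Strategy: if the (sorted) segments are stored contiguously in
        # the file, consolidate many small reads into single large reads
        # of up to ~buffersize bytes and slice the individual segments
        # out of the buffer; otherwise, read segments one by one while
        # holding the lock, yielding at most ~buffersize bytes per pass.
        # Segments with zero offset or bytecount are missing and yield None.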
assert self._fh is not None length = len(offsets) if length < 1: return if length == 1: index = 0 if indices is None else indices[0] if bytecounts[index] > 0 and offsets[index] > 0: if lock is None: lock = self._lock with lock: self.seek(offsets[index]) data = self._fh.read(bytecounts[index]) else: data = None yield (data, index) if flat else [(data, index)] return if lock is None: lock = self._lock if buffersize is None: buffersize = TIFF.BUFFERSIZE if indices is None: segments = [(i, offsets[i], bytecounts[i]) for i in range(length)] else: segments = [ (indices[i], offsets[i], bytecounts[i]) for i in range(length) ] if sort: segments = sorted(segments, key=lambda x: x[1]) iscontig = True for i in range(length - 1): _, offset, bytecount = segments[i] nextoffset = segments[i + 1][1] if offset == 0 or bytecount == 0 or nextoffset == 0: continue if offset + bytecount != nextoffset: iscontig = False break seek = self.seek read = self._fh.read result: list[tuple[bytes | None, int]] if iscontig: # consolidate reads i = 0 while i < length: j = i offset = -1 bytecount = 0 while bytecount <= buffersize and i < length: _, o, b = segments[i] if o > 0 and b > 0: if offset < 0: offset = o bytecount += b i += 1 if offset < 0: data = None else: with lock: seek(offset) data = read(bytecount) start = 0 stop = 0 result = [] while j < i: index, offset, bytecount = segments[j] if offset > 0 and bytecount > 0: stop += bytecount result.append( (data[start:stop], index) # type: ignore[index] ) start = stop else: result.append((None, index)) j += 1 if flat: yield from result else: yield result return i = 0 while i < length: result = [] size = 0 with lock: while size <= buffersize and i < length: index, offset, bytecount = segments[i] if offset > 0 and bytecount > 0: seek(offset) result.append((read(bytecount), index)) # buffer = bytearray(bytecount) # n = fh.readinto(buffer) # data.append(buffer[:n]) size += bytecount else: result.append((None, index)) i += 1 if flat: yield from result else: yield result def __enter__(self) -> FileHandle: return self def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: self.close() self._file = None # TODO: this may crash the Python interpreter under certain conditions # def __getattr__(self, name: str, /) -> Any: # """Return attribute from underlying file object.""" # if self._offset: # warnings.warn( # ' ' # f'{name} not implemented for embedded files', # UserWarning, # ) # return getattr(self._fh, name) def __repr__(self) -> str: return f'' def __str__(self) -> str: return '\n '.join( ( 'FileHandle', self._name, self._dir, f'{self._size} bytes', 'closed' if self._fh is None else 'open', ) ) @property def name(self) -> str: """Name of file or stream.""" return self._name @property def dirname(self) -> str: """Directory in which file is stored.""" return self._dir @property def path(self) -> str: """Absolute path of file.""" return os.path.join(self._dir, self._name) @property def extension(self) -> str: """File name extension of file or stream.""" name, ext = os.path.splitext(self._name.lower()) if ext and name.endswith('.ome'): ext = '.ome' + ext return ext @property def size(self) -> int: """Size of file in bytes.""" return self._size @property def closed(self) -> bool: """File is closed.""" return self._fh is None @property def lock(self) -> threading.RLock | NullContext: """Reentrant lock to synchronize reads and writes.""" return self._lock @lock.setter def lock(self, value: bool, /) -> None: self.set_lock(value) def set_lock(self, value: bool, /) -> 
None: if bool(value) == isinstance(self._lock, NullContext): self._lock = threading.RLock() if value else NullContext() @property def has_lock(self) -> bool: """A reentrant lock is currently used to sync reads and writes.""" return not isinstance(self._lock, NullContext) @property def is_file(self) -> bool: """File has fileno and can be memory-mapped.""" try: self._fh.fileno() # type: ignore[union-attr] return True except Exception: return False @final class FileCache: """Keep FileHandles open. Parameters: size: Maximum number of files to keep open. The default is 8. lock: Reentrant lock to synchronize reads and writes. """ __slots__ = ('files', 'keep', 'past', 'lock', 'size') size: int """Maximum number of files to keep open.""" files: dict[FileHandle, int] """Reference counts of opened files.""" keep: set[FileHandle] """Set of files to keep open.""" past: list[FileHandle] """FIFO list of opened files.""" lock: threading.RLock | NullContext """Reentrant lock to synchronize reads and writes.""" def __init__( self, size: int | None = None, *, lock: threading.RLock | NullContext | None = None, ) -> None: self.past = [] self.files = {} self.keep = set() self.size = 8 if size is None else int(size) self.lock = NullContext() if lock is None else lock def open(self, fh: FileHandle, /) -> None: """Open file, re-open if necessary.""" with self.lock: if fh in self.files: self.files[fh] += 1 elif fh.closed: fh.open() self.files[fh] = 1 self.past.append(fh) else: self.files[fh] = 2 self.keep.add(fh) self.past.append(fh) def close(self, fh: FileHandle, /) -> None: """Close least recently used open files.""" with self.lock: if fh in self.files: self.files[fh] -= 1 self._trim() def clear(self) -> None: """Close all opened files if not in use when opened first.""" with self.lock: for fh, refcount in list(self.files.items()): if fh not in self.keep: fh.close() del self.files[fh] del self.past[self.past.index(fh)] def read( self, fh: FileHandle, /, offset: int, bytecount: int, whence: int = 0, ) -> bytes: """Return bytes read from binary file. Parameters: fh: File handle to read from. offset: Position in file to start reading from relative to the position indicated by `whence`. bytecount: Number of bytes to read. whence: Relative position of offset. 0 (`os.SEEK_SET`) beginning of file (default). 1 (`os.SEEK_CUR`) current position. 2 (`os.SEEK_END`) end of file. """ # this function is more efficient than # filecache.open(fh) # with lock: # fh.seek() # data = fh.read() # filecache.close(fh) with self.lock: b = fh not in self.files if b: if fh.closed: fh.open() self.files[fh] = 0 else: self.files[fh] = 1 self.keep.add(fh) self.past.append(fh) fh.seek(offset, whence) data = fh.read(bytecount) if b: self._trim() return data def write( self, fh: FileHandle, /, offset: int, data: bytes, whence: int = 0, ) -> int: """Write bytes to binary file. Parameters: fh: File handle to write to. offset: Position in file to start writing from relative to the position indicated by `whence`. value: Bytes to write. whence: Relative position of offset. 0 (`os.SEEK_SET`) beginning of file (default). 1 (`os.SEEK_CUR`) current position. 2 (`os.SEEK_END`) end of file. 
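        Examples:
            A minimal sketch, assuming `fh` is a writable
            :py:class:`FileHandle`:

            >>> cache = FileCache(size=4)
            >>> cache.write(fh, 0, b'abc')  # doctest: +SKIP
            3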
""" with self.lock: b = fh not in self.files if b: if fh.closed: fh.open() self.files[fh] = 0 else: self.files[fh] = 1 self.keep.add(fh) self.past.append(fh) fh.seek(offset, whence) written = fh.write(data) if b: self._trim() return written def _trim(self) -> None: """Trim file cache.""" index = 0 size = len(self.past) while index < size > self.size: fh = self.past[index] if fh not in self.keep and self.files[fh] <= 0: fh.close() del self.files[fh] del self.past[index] size -= 1 else: index += 1 def __len__(self) -> int: """Return number of open files.""" return len(self.files) def __repr__(self) -> str: return f'' @final class StoredShape(Sequence[int]): """Normalized shape of image array in TIFF pages. Parameters: frames: Number of TIFF pages. separate_samples: Number of separate samples. depth: Image depth. length: Image length (height). width: Image width. contig_samples: Number of contiguous samples. extrasamples: Number of extra samples. """ __slots__ = ( 'frames', 'separate_samples', 'depth', 'length', 'width', 'contig_samples', 'extrasamples', ) frames: int """Number of TIFF pages.""" separate_samples: int """Number of separate samples.""" depth: int """Image depth. Value of ImageDepth tag or 1.""" length: int """Image length (height). Value of ImageLength tag.""" width: int """Image width. Value of ImageWidth tag.""" contig_samples: int """Number of contiguous samples.""" extrasamples: int """Number of extra samples. Count of ExtraSamples tag or 0.""" def __init__( self, frames: int = 1, separate_samples: int = 1, depth: int = 1, length: int = 1, width: int = 1, contig_samples: int = 1, extrasamples: int = 0, ) -> None: if separate_samples != 1 and contig_samples != 1: raise ValueError('invalid samples') self.frames = int(frames) self.separate_samples = int(separate_samples) self.depth = int(depth) self.length = int(length) self.width = int(width) self.contig_samples = int(contig_samples) self.extrasamples = int(extrasamples) @property def size(self) -> int: """Product of all dimensions.""" return ( abs(self.frames) * self.separate_samples * self.depth * self.length * self.width * self.contig_samples ) @property def samples(self) -> int: """Number of samples. 
Count of SamplesPerPixel tag.""" assert self.separate_samples == 1 or self.contig_samples == 1 samples = ( self.separate_samples if self.separate_samples > 1 else self.contig_samples ) assert self.extrasamples < samples return samples @property def photometric_samples(self) -> int: """Number of photometric samples.""" return self.samples - self.extrasamples @property def shape(self) -> tuple[int, int, int, int, int, int]: """Normalized 6D shape of image array in all pages.""" return ( self.frames, self.separate_samples, self.depth, self.length, self.width, self.contig_samples, ) @property def page_shape(self) -> tuple[int, int, int, int, int]: """Normalized 5D shape of image array in single page.""" return ( self.separate_samples, self.depth, self.length, self.width, self.contig_samples, ) @property def page_size(self) -> int: """Product of dimensions in single page.""" return ( self.separate_samples * self.depth * self.length * self.width * self.contig_samples ) @property def squeezed(self) -> tuple[int, ...]: """Shape with length-1 removed, except for length and width.""" shape = [self.length, self.width] if self.separate_samples > 1: shape.insert(0, self.separate_samples) elif self.contig_samples > 1: shape.append(self.contig_samples) if self.frames > 1: shape.insert(0, self.frames) return tuple(shape) @property def is_valid(self) -> bool: """Shape is valid.""" return ( self.frames >= 1 and self.depth >= 1 and self.length >= 1 and self.width >= 1 and (self.separate_samples == 1 or self.contig_samples == 1) and ( self.contig_samples if self.contig_samples > 1 else self.separate_samples ) > self.extrasamples ) @property def is_planar(self) -> bool: """Shape contains planar samples.""" return self.separate_samples > 1 @property def planarconfig(self) -> int | None: """Value of PlanarConfiguration tag.""" if self.separate_samples > 1: return 2 # PLANARCONFIG.SEPARATE if self.contig_samples > 1: return 1 # PLANARCONFIG.CONTIG return None def __len__(self) -> int: return 6 @overload def __getitem__(self, key: int, /) -> int: ... @overload def __getitem__(self, key: slice, /) -> tuple[int, ...]: ... def __getitem__(self, key: int | slice, /) -> int | tuple[int, ...]: return ( self.frames, self.separate_samples, self.depth, self.length, self.width, self.contig_samples, )[key] def __eq__(self, other: object, /) -> bool: return ( isinstance(other, StoredShape) and self.frames == other.frames and self.separate_samples == other.separate_samples and self.depth == other.depth and self.length == other.length and self.width == other.width and self.contig_samples == other.contig_samples ) def __repr__(self) -> str: return ( '' ) @final class NullContext: """Null context manager. Can be used as a dummy reentrant lock. >>> with NullContext(): ... pass ... """ __slots__ = () def __enter__(self) -> NullContext: return self def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: pass def __repr__(self) -> str: return 'NullContext()' @final class Timer: """Stopwatch for timing execution speed. Parameters: message: Message to print. end: End of print statement. started: Value of performance counter when started. The default is the current performance counter. Examples: >>> with Timer('sleep:'): ... time.sleep(1.05) sleep: 1.0... 
s """ __slots__ = ('started', 'stopped', 'duration') started: float """Value of performance counter when started.""" stopped: float """Value of performance counter when stopped.""" duration: float """Duration between `started` and `stopped` in seconds.""" def __init__( self, message: str | None = None, *, end: str = ' ', started: float | None = None, ) -> None: if message is not None: print(message, end=end, flush=True) self.duration = 0.0 if started is None: started = time.perf_counter() self.started = self.stopped = started def start(self, message: str | None = None, *, end: str = ' ') -> float: """Start timer and return current time.""" if message is not None: print(message, end=end, flush=True) self.duration = 0.0 self.started = self.stopped = time.perf_counter() return self.started def stop(self, message: str | None = None, *, end: str = ' ') -> float: """Return duration of timer till start. Parameters: message: Message to print. end: End of print statement. """ self.stopped = time.perf_counter() if message is not None: print(message, end=end, flush=True) self.duration = self.stopped - self.started return self.duration def print( self, message: str | None = None, *, end: str | None = None ) -> None: """Print duration from timer start till last stop or now. Parameters: message: Message to print. end: End of print statement. """ msg = str(self) if message is not None: print(message, end=' ') print(msg, end=end, flush=True) @staticmethod def clock() -> float: """Return value of performance counter.""" return time.perf_counter() def __str__(self) -> str: """Return duration from timer start till last stop or now.""" if self.duration <= 0.0: # not stopped duration = time.perf_counter() - self.started else: duration = self.duration s = str(TimeDelta(seconds=duration)) i = 0 while i < len(s) and s[i : i + 2] in '0:0010203040506070809': i += 1 if s[i : i + 1] == ':': i += 1 return f'{s[i:]} s' def __repr__(self) -> str: return f'Timer(started={self.started})' def __enter__(self) -> Timer: return self def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: self.print() class OmeXmlError(Exception): """Exception to indicate invalid OME-XML or unsupported cases.""" @final class OmeXml: """Create OME-TIFF XML metadata. Parameters: **metadata: Additional OME-XML attributes or elements to be stored. Creator: Name of creating application. The default is 'tifffile'. UUID: Unique identifier. Examples: >>> omexml = OmeXml() >>> omexml.addimage( ... dtype='uint16', ... shape=(32, 256, 256), ... storedshape=(32, 1, 1, 256, 256, 1), ... axes='CYX', ... Name='First Image', ... PhysicalSizeX=2.0, ... MapAnnotation={'key': 'value'}, ... Dataset={'Name': 'FirstDataset'}, ... ) >>> xml = omexml.tostring() >>> xml '......' 
        >>> OmeXml.validate(xml)
        True

    """

    images: list[str]
    """OME-XML Image elements."""

    annotations: list[str]
    """OME-XML Annotation elements."""

    datasets: list[str]
    """OME-XML Dataset elements."""

    _xml: str
    _ifd: int

    def __init__(self, **metadata: Any) -> None:
        metadata = metadata.get('OME', metadata)
        self._ifd = 0
        self.images = []
        self.annotations = []
        self.datasets = []
        # TODO: parse other OME elements from metadata
        #   Project
        #   Folder
        #   Experiment
        #   Plate
        #   Screen
        #   Experimenter
        #   ExperimenterGroup
        #   Instrument
        #   ROI
        if 'UUID' in metadata:
            uuid = metadata['UUID'].split(':')[-1]
        else:
            from uuid import uuid1

            uuid = str(uuid1())
        creator = OmeXml._attribute(
            metadata, 'Creator', default=f'tifffile.py {__version__}'
        )
        schema = 'http://www.openmicroscopy.org/Schemas/OME/2016-06'
        self._xml = (
            '{declaration}'
            f'<OME xmlns="{schema}" '
            'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
            f'xsi:schemaLocation="{schema} {schema}/ome.xsd" '
            f'UUID="urn:uuid:{uuid}"{creator}>'
            '{datasets}'
            '{images}'
            '{annotations}'
            '</OME>'
        )

    def addimage(
        self,
        dtype: DTypeLike,
        shape: Sequence[int],
        storedshape: tuple[int, int, int, int, int, int],
        *,
        axes: str | None = None,
        **metadata: Any,
    ) -> None:
        """Add image to OME-XML.

        The OME model can handle up to 9 dimensional images for selected
        axes orders. Refer to the OME-XML specification for details.
        Non-TZCYXS (modulo) dimensions must be after a TZC dimension or
        require an unused TZC dimension.

        Parameters:
            dtype:
                Data type of image array.
            shape:
                Shape of image array.
            storedshape:
                Normalized shape describing how image array is stored
                in TIFF file as (pages, separate_samples, depth, length,
                width, contig_samples).
            axes:
                Character codes for dimensions in `shape`.
                By default, `axes` is determined from the DimensionOrder
                metadata attribute or matched to the `shape` in reverse
                order of TZC(S)YX(S) based on `storedshape`.
                The following codes are supported: 'S' sample, 'X' width,
                'Y' length, 'Z' depth, 'C' channel, 'T' time, 'A' angle,
                'P' phase, 'R' tile, 'H' lifetime, 'E' lambda, 'Q' other.
            **metadata:
                Additional OME-XML attributes or elements to be stored.
                Image/Pixels: Name, Description, DimensionOrder,
                TypeDescription, PhysicalSizeX, PhysicalSizeXUnit,
                PhysicalSizeY, PhysicalSizeYUnit, PhysicalSizeZ,
                PhysicalSizeZUnit, TimeIncrement, TimeIncrementUnit,
                StructuredAnnotations, BooleanAnnotation, DoubleAnnotation,
                LongAnnotation, CommentAnnotation, MapAnnotation, Dataset.
                Per Plane: DeltaT, DeltaTUnit, ExposureTime,
                ExposureTimeUnit, PositionX, PositionXUnit, PositionY,
                PositionYUnit, PositionZ, PositionZUnit.
                Per Channel: Name, AcquisitionMode, Color, ContrastMethod,
                EmissionWavelength, EmissionWavelengthUnit,
                ExcitationWavelength, ExcitationWavelengthUnit, Fluor,
                IlluminationType, NDFilter, PinholeSize, PinholeSizeUnit,
                PockelCellSetting.

        Raises:
            OmeXmlError: Image format not supported.
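        Examples:
            A minimal example adding a three-channel image:

            >>> omexml = OmeXml()
            >>> omexml.addimage(
            ...     'uint8',
            ...     shape=(3, 256, 256),
            ...     storedshape=(3, 1, 1, 256, 256, 1),
            ...     axes='CYX',
            ... )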
""" index = len(self.images) annotation_refs = [] # get Image and Pixels metadata metadata = metadata.get('OME', metadata) metadata = metadata.get('Image', metadata) if isinstance(metadata, (list, tuple)): # multiple images metadata = metadata[index] if 'Pixels' in metadata: # merge with Image import copy metadata = copy.deepcopy(metadata) if 'ID' in metadata['Pixels']: del metadata['Pixels']['ID'] metadata.update(metadata['Pixels']) del metadata['Pixels'] try: dtype = numpy.dtype(dtype).name dtype = { 'int8': 'int8', 'int16': 'int16', 'int32': 'int32', 'uint8': 'uint8', 'uint16': 'uint16', 'uint32': 'uint32', 'float32': 'float', 'float64': 'double', 'complex64': 'complex', 'complex128': 'double-complex', 'bool': 'bit', }[dtype] except KeyError as exc: raise OmeXmlError(f'data type {dtype!r} not supported') from exc if metadata.get('Type', dtype) != dtype: raise OmeXmlError( f'metadata Pixels Type {metadata["Type"]!r} ' f'does not match array dtype {dtype!r}' ) samples = 1 planecount, separate, depth, length, width, contig = storedshape if depth != 1: raise OmeXmlError('ImageDepth not supported') if not (separate == 1 or contig == 1): raise ValueError('invalid stored shape') shape = tuple(int(i) for i in shape) ndim = len(shape) if ndim < 1 or product(shape) <= 0: raise OmeXmlError('empty arrays not supported') if axes is None: # get axes from shape, stored shape, and DimensionOrder if contig != 1 or shape[-3:] == (length, width, 1): axes = 'YXS' samples = contig elif separate != 1 or ( ndim == 6 and shape[-3:] == (1, length, width) ): axes = 'SYX' samples = separate else: axes = 'YX' if not len(axes) <= ndim <= (6 if 'S' in axes else 5): raise OmeXmlError(f'{ndim} dimensions not supported') hiaxes: str = metadata.get('DimensionOrder', 'XYCZT')[:1:-1] axes = hiaxes[(6 if 'S' in axes else 5) - ndim :] + axes assert len(axes) == len(shape) else: # validate axes against shape and stored shape axes = axes.upper() if len(axes) != len(shape): raise ValueError('axes do not match shape') if not ( axes.endswith('YX') or axes.endswith('YXS') or (axes.endswith('YXC') and 'S' not in axes) ): raise OmeXmlError('dimensions must end with YX or YXS') unique = [] for ax in axes: if ax not in 'TZCYXSAPRHEQ': raise OmeXmlError(f'dimension {ax!r} not supported') if ax in unique: raise OmeXmlError(f'multiple {ax!r} dimensions') unique.append(ax) if ndim > (9 if 'S' in axes else 8): raise OmeXmlError('more than 8 dimensions not supported') if contig != 1: samples = contig if ndim < 3: raise ValueError('dimensions do not match stored shape') if axes[-1] == 'C': # allow C axis instead of S if 'S' in axes: raise ValueError('invalid axes') axes = axes.replace('C', 'S') elif axes[-1] != 'S': raise ValueError('axes do not match stored shape') if shape[-1] != contig or shape[-2] != width: raise ValueError('shape does not match stored shape') elif separate != 1: samples = separate if ndim < 3: raise ValueError('dimensions do not match stored shape') if axes[-3] == 'C': # allow C axis instead of S if 'S' in axes: raise ValueError('invalid axes') axes = axes.replace('C', 'S') elif axes[-3] != 'S': raise ValueError('axes do not match stored shape') if shape[-3] != separate or shape[-1] != width: raise ValueError('shape does not match stored shape') if shape[axes.index('X')] != width or shape[axes.index('Y')] != length: raise ValueError('shape does not match stored shape') if 'S' in axes: hiaxes = axes[: min(axes.index('S'), axes.index('Y'))] else: hiaxes = axes[: axes.index('Y')] if any(ax in 'APRHEQ' for ax in hiaxes): # 
modulo axes modulo = {} dimorder = '' axestype = { 'A': 'angle', 'P': 'phase', 'R': 'tile', 'H': 'lifetime', 'E': 'lambda', 'Q': 'other', } axestypedescr = metadata.get('TypeDescription', {}) for i, ax in enumerate(hiaxes): if ax in 'APRHEQ': if ax in axestypedescr: typedescr = f'TypeDescription="{axestypedescr[ax]}" ' else: typedescr = '' x = hiaxes[i - 1 : i] if x and x in 'TZC': # use previous axis modulo[x] = axestype[ax], shape[i], typedescr else: # use next unused axis for x in 'TZC': if ( x not in dimorder and x not in hiaxes and x not in modulo ): modulo[x] = axestype[ax], shape[i], typedescr dimorder += x break else: # TODO: support any order of axes, such as, APRTZC raise OmeXmlError('more than 3 modulo dimensions') else: dimorder += ax hiaxes = dimorder # TODO: use user-specified start, stop, step, or labels moduloalong = ''.join( f'' for ax, (axtype, size, typedescr) in modulo.items() ) annotation_refs.append( f'' ) self.annotations.append( f'' '' '' f'{moduloalong}' '' '' '' ) else: modulo = {} annotationref = '' hiaxes = hiaxes[::-1] for dimorder in ( metadata.get('DimensionOrder', 'XYCZT'), 'XYCZT', 'XYZCT', 'XYZTC', 'XYCTZ', 'XYTCZ', 'XYTZC', ): if hiaxes in dimorder: break else: raise OmeXmlError( f'dimension order {axes!r} not supported ({hiaxes=})' ) dimsizes = [] for ax in dimorder: if ax == 'S': continue if ax in axes: size = shape[axes.index(ax)] else: size = 1 if ax == 'C': sizec = size size *= samples if ax in modulo: size *= modulo[ax][1] dimsizes.append(size) sizes = ''.join( f' Size{ax}="{size}"' for ax, size in zip(dimorder, dimsizes) ) # verify DimensionOrder in metadata is compatible if 'DimensionOrder' in metadata: omedimorder = metadata['DimensionOrder'] omedimorder = ''.join( ax for ax in omedimorder if dimsizes[dimorder.index(ax)] > 1 ) if hiaxes not in omedimorder: raise OmeXmlError( f'metadata DimensionOrder does not match {axes!r}' ) # verify metadata Size values match shape for ax, size in zip(dimorder, dimsizes): if metadata.get(f'Size{ax}', size) != size: raise OmeXmlError( f'metadata Size{ax} does not match {shape!r}' ) dimsizes[dimorder.index('C')] //= samples if planecount != product(dimsizes[2:]): raise ValueError('shape does not match stored shape') plane_list = [] planeattributes = metadata.get('Plane', '') if planeattributes: cztorder = tuple(dimorder[2:].index(ax) for ax in 'CZT') for p in range(planecount): attributes = OmeXml._attributes( planeattributes, p, 'DeltaT', 'DeltaTUnit', 'ExposureTime', 'ExposureTimeUnit', 'PositionX', 'PositionXUnit', 'PositionY', 'PositionYUnit', 'PositionZ', 'PositionZUnit', ) unraveled = numpy.unravel_index(p, dimsizes[2:], order='F') c, z, t = (int(unraveled[i]) for i in cztorder) plane_list.append( f'' ) # TODO: if possible, verify c, z, t match planeattributes planes = ''.join(plane_list) channel_list = [] for c in range(sizec): lightpath = '' # TODO: use LightPath elements from metadata # 'AnnotationRef', # 'DichroicRef', # 'EmissionFilterRef', # 'ExcitationFilterRef' attributes = OmeXml._attributes( metadata.get('Channel', ''), c, 'Name', 'AcquisitionMode', 'Color', 'ContrastMethod', 'EmissionWavelength', 'EmissionWavelengthUnit', 'ExcitationWavelength', 'ExcitationWavelengthUnit', 'Fluor', 'IlluminationType', 'NDFilter', 'PinholeSize', 'PinholeSizeUnit', 'PockelCellSetting', ) channel_list.append( f'' f'{lightpath}' '' ) channels = ''.join(channel_list) # TODO: support more Image elements elements = OmeXml._elements(metadata, 'AcquisitionDate', 'Description') name = OmeXml._attribute(metadata, 'Name', 
default=f'Image{index}') attributes = OmeXml._attributes( metadata, None, 'SignificantBits', 'PhysicalSizeX', 'PhysicalSizeXUnit', 'PhysicalSizeY', 'PhysicalSizeYUnit', 'PhysicalSizeZ', 'PhysicalSizeZUnit', 'TimeIncrement', 'TimeIncrementUnit', ) if separate > 1 or contig > 1: interleaved = 'false' if separate > 1 else 'true' interleaved = f' Interleaved="{interleaved}"' else: interleaved = '' self._dataset( metadata.get('Dataset', {}), f'' ) self._annotations( metadata.get('StructuredAnnotations', metadata), annotation_refs ) annotationref = ''.join(annotation_refs) self.images.append( f'' f'{elements}' f'' f'{channels}' f'' f'{planes}' '' f'{annotationref}' '' ) self._ifd += planecount def tostring(self, *, declaration: bool = False) -> str: """Return OME-XML string. Parameters: declaration: Include XML declaration. """ # TODO: support other top-level elements datasets = ''.join(self.datasets) images = ''.join(self.images) annotations = ''.join(self.annotations) if annotations: annotations = ( f'{annotations}' ) if declaration: declaration_str = '' else: declaration_str = '' xml = self._xml.format( declaration=declaration_str, images=images, annotations=annotations, datasets=datasets, ) return xml def __repr__(self) -> str: return f'' def __str__(self) -> str: """Return OME-XML string.""" xml = self.tostring() try: from lxml import etree parser = etree.XMLParser(remove_blank_text=True) tree = etree.fromstring(xml, parser) xml = etree.tostring( tree, encoding='utf-8', pretty_print=True, xml_declaration=True ).decode() except ImportError: pass except Exception as exc: warnings.warn( f' {exc.__class__.__name__}: {exc}', UserWarning, ) return xml @staticmethod def _escape(value: object, /) -> str: """Return escaped string of value.""" if not isinstance(value, str): value = str(value) elif '&' in value or '>' in value or '<' in value: return value value = value.replace('&', '&') value = value.replace('>', '>') value = value.replace('<', '<') return value @staticmethod def _element( metadata: dict[str, Any], name: str, default: str | None = None ) -> str: """Return XML formatted element if name in metadata.""" value = metadata.get(name, default) if value is None: return '' return f'<{name}>{OmeXml._escape(value)}' @staticmethod def _elements(metadata: dict[str, Any], /, *names: str) -> str: """Return XML formatted elements.""" if not metadata: return '' elements = (OmeXml._element(metadata, name) for name in names) return ''.join(e for e in elements if e) @staticmethod def _attribute( metadata: dict[str, Any], name: str, /, index: int | None = None, default: Any = None, ) -> str: """Return XML formatted attribute if name in metadata.""" value = metadata.get(name, default) if value is None: return '' if index is not None: if isinstance(value, (list, tuple)): try: value = value[index] except IndexError as exc: raise IndexError( f'list index out of range for attribute {name!r}' ) from exc elif index > 0: raise TypeError( f'{type(value).__name__!r} is not a list or tuple' ) return f' {name}="{OmeXml._escape(value)}"' @staticmethod def _attributes( metadata: dict[str, Any], index_: int | None, /, *names: str, ) -> str: """Return XML formatted attributes.""" if not metadata: return '' if index_ is None: attributes = (OmeXml._attribute(metadata, name) for name in names) elif isinstance(metadata, (list, tuple)): metadata = metadata[index_] attributes = (OmeXml._attribute(metadata, name) for name in names) elif isinstance(metadata, dict): attributes = ( OmeXml._attribute(metadata, name, index_) for 
name in names ) return ''.join(a for a in attributes if a) def _dataset(self, metadata: dict[str, Any] | None, imageref: str) -> None: """Add Dataset element to self.datasets.""" index = len(self.datasets) if metadata is None: # dataset explicitly disabled return None if not metadata and index == 0: # no dataset provided yet return None if not metadata: # use previous dataset index -= 1 if '', f'{imageref}' ) return None # new dataset name = metadata.get('Name', '') if name: name = f' Name="{OmeXml._escape(name)}"' description = metadata.get('Description', '') if description: description = ( f'{OmeXml._escape(description)}' ) annotation_refs: list[str] = [] self._annotations(metadata, annotation_refs) annotationref = ''.join(annotation_refs) self.datasets.append( f'' f'{description}' f'{imageref}' f'{annotationref}' '' ) return None # f'' def _annotations( self, metadata: dict[str, Any], annotation_refs: list[str] ) -> None: """Add annotations to self.annotations and annotation_refs.""" values: Any for name, values in metadata.items(): if name not in { 'BooleanAnnotation', 'DoubleAnnotation', 'LongAnnotation', 'CommentAnnotation', 'MapAnnotation', # 'FileAnnotation', # 'ListAnnotation', # 'TimestampAnnotation, # 'XmlAnnotation', }: continue if not values: continue if not isinstance(values, (list, tuple)): values = [values] for value in values: namespace = '' description = '' if isinstance(value, dict): value = value.copy() description = value.pop('Description', '') if description: description = ( '' f'{OmeXml._escape(description)}' '' ) namespace = value.pop('Namespace', '') if namespace: namespace = f' Namespace="{OmeXml._escape(namespace)}"' value = value.pop('Value', value) if name == 'MapAnnotation': if not isinstance(value, dict): raise ValueError('MapAnnotation is not a dict') values = [ f'{OmeXml._escape(v)}' for k, v in value.items() ] elif name == 'BooleanAnnotation': values = [f'{bool(value)}'.lower()] else: values = [OmeXml._escape(str(value))] annotation_refs.append( f'' ) self.annotations.append( ''.join( ( f'<{name} ' f'ID="Annotation:{len(self.annotations)}"' f'{namespace}>', description, '', ''.join(values), '', f'', ) ) ) @staticmethod def validate( omexml: str, /, omexsd: bytes | None = None, assert_: bool = True, *, _schema: list[Any] = [], # etree.XMLSchema ) -> bool | None: r"""Return if OME-XML is valid according to XMLSchema. Parameters: omexml: OME-XML string to validate. omexsd: Content of OME-XSD schema to validate against. By default, the 2016-06 OME XMLSchema is downloaded on first run. assert\_: Raise AssertionError if validation fails. _schema: Internal use. Raises: AssertionError: Validation failed and `assert\_` is *True*. 
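        Examples:
            A minimal sketch; requires `lxml` and the 2016-06 OME XSD
            schema:

            >>> OmeXml.validate(OmeXml().tostring())  # doctest: +SKIP
            True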
""" from lxml import etree if not _schema: if omexsd is None: omexsd_path = os.path.join( os.path.dirname(__file__), 'ome.xsd' ) if os.path.exists(omexsd_path): with open(omexsd_path, 'rb') as fh: omexsd = fh.read() else: import urllib.request with urllib.request.urlopen( 'https://www.openmicroscopy.org/' 'Schemas/OME/2016-06/ome.xsd' ) as fh: omexsd = fh.read() if omexsd.startswith(b'', 1)[-1] try: _schema.append( etree.XMLSchema(etree.fromstring(omexsd.decode())) ) except Exception: # raise _schema.append(None) if _schema and _schema[0] is not None: if omexml.startswith('', 1)[-1] tree = etree.fromstring(omexml) if assert_: _schema[0].assert_(tree) return True return bool(_schema[0].validate(tree)) return None @final class CompressionCodec(Mapping[int, Callable[..., object]]): """Map :py:class:`COMPRESSION` value to encode or decode function. Parameters: encode: If *True*, return encode functions, else decode functions. """ _codecs: dict[int, Callable[..., Any]] _encode: bool def __init__(self, encode: bool) -> None: self._codecs = {1: identityfunc} self._encode = bool(encode) def __getitem__(self, key: int, /) -> Callable[..., Any]: if key in self._codecs: return self._codecs[key] codec: Callable[..., Any] try: # TODO: enable CCITTRLE decoder for future imagecodecs # if key == 2: # if self._encode: # codec = imagecodecs.ccittrle_encode # else: # codec = imagecodecs.ccittrle_decode if key == 5: if self._encode: codec = imagecodecs.lzw_encode else: codec = imagecodecs.lzw_decode elif key in {6, 7, 33007}: if self._encode: if key in {6, 33007}: raise NotImplementedError codec = imagecodecs.jpeg_encode else: codec = imagecodecs.jpeg_decode elif key in {8, 32946, 50013}: if ( hasattr(imagecodecs, 'DEFLATE') and imagecodecs.DEFLATE.available ): # imagecodecs built with deflate if self._encode: codec = imagecodecs.deflate_encode else: codec = imagecodecs.deflate_decode elif ( hasattr(imagecodecs, 'ZLIB') and imagecodecs.ZLIB.available ): if self._encode: codec = imagecodecs.zlib_encode else: codec = imagecodecs.zlib_decode else: # imagecodecs built without zlib try: from . import _imagecodecs except ImportError: import _imagecodecs # type: ignore[no-redef] if self._encode: codec = _imagecodecs.zlib_encode else: codec = _imagecodecs.zlib_decode elif key == 32773: if self._encode: codec = imagecodecs.packbits_encode else: codec = imagecodecs.packbits_decode elif key in {33003, 33004, 33005, 34712}: if self._encode: codec = imagecodecs.jpeg2k_encode else: codec = imagecodecs.jpeg2k_decode elif key == 34887: if self._encode: codec = imagecodecs.lerc_encode else: codec = imagecodecs.lerc_decode elif key == 34892: # DNG lossy if self._encode: codec = imagecodecs.jpeg8_encode else: codec = imagecodecs.jpeg8_decode elif key == 34925: if hasattr(imagecodecs, 'LZMA') and imagecodecs.LZMA.available: if self._encode: codec = imagecodecs.lzma_encode else: codec = imagecodecs.lzma_decode else: # imagecodecs built without lzma try: from . 
import _imagecodecs except ImportError: import _imagecodecs # type: ignore[no-redef] if self._encode: codec = _imagecodecs.lzma_encode else: codec = _imagecodecs.lzma_decode elif key == 34933: if self._encode: codec = imagecodecs.png_encode else: codec = imagecodecs.png_decode elif key in {34934, 22610}: if self._encode: codec = imagecodecs.jpegxr_encode else: codec = imagecodecs.jpegxr_decode elif key == 48124: if self._encode: codec = imagecodecs.jetraw_encode else: codec = imagecodecs.jetraw_decode elif key in {50000, 34926}: # 34926 deprecated if self._encode: codec = imagecodecs.zstd_encode else: codec = imagecodecs.zstd_decode elif key in {50001, 34927}: # 34927 deprecated if self._encode: codec = imagecodecs.webp_encode else: codec = imagecodecs.webp_decode elif key in {65000, 65001, 65002} and not self._encode: codec = imagecodecs.eer_decode elif key in {50002, 52546}: if self._encode: codec = imagecodecs.jpegxl_encode else: codec = imagecodecs.jpegxl_decode else: try: msg = f'{COMPRESSION(key)!r} not supported' except ValueError: msg = f'{key} is not a known COMPRESSION' raise KeyError(msg) except (AttributeError, ImportError) as exc: raise KeyError( f'{COMPRESSION(key)!r} ' "requires the 'imagecodecs' package" ) from exc except NotImplementedError as exc: raise KeyError(f'{COMPRESSION(key)!r} not implemented') from exc self._codecs[key] = codec return codec def __contains__(self, key: Any, /) -> bool: try: self[key] except KeyError: return False return True def __iter__(self) -> Iterator[int]: yield 1 # dummy def __len__(self) -> int: return 1 # dummy @final class PredictorCodec(Mapping[int, Callable[..., object]]): """Map :py:class:`PREDICTOR` value to encode or decode function. Parameters: encode: If *True*, return encode functions, else decode functions. 
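    Examples:
        A minimal sketch, assuming the `imagecodecs` package is
        available:

        >>> decode = PredictorCodec(encode=False)[2]
        >>> decode is imagecodecs.delta_decode
        True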
""" _codecs: dict[int, Callable[..., Any]] _encode: bool def __init__(self, encode: bool) -> None: self._codecs = {1: identityfunc} self._encode = bool(encode) def __getitem__(self, key: int, /) -> Callable[..., Any]: if key in self._codecs: return self._codecs[key] codec: Callable[..., Any] try: if key == 2: if self._encode: codec = imagecodecs.delta_encode else: codec = imagecodecs.delta_decode elif key == 3: if self._encode: codec = imagecodecs.floatpred_encode else: codec = imagecodecs.floatpred_decode elif key == 34892: if self._encode: def codec(data, axis=-1, out=None): return imagecodecs.delta_encode( data, axis=axis, out=out, dist=2 ) else: def codec(data, axis=-1, out=None): return imagecodecs.delta_decode( data, axis=axis, out=out, dist=2 ) elif key == 34893: if self._encode: def codec(data, axis=-1, out=None): return imagecodecs.delta_encode( data, axis=axis, out=out, dist=4 ) else: def codec(data, axis=-1, out=None): return imagecodecs.delta_decode( data, axis=axis, out=out, dist=4 ) elif key == 34894: if self._encode: def codec(data, axis=-1, out=None): return imagecodecs.floatpred_encode( data, axis=axis, out=out, dist=2 ) else: def codec(data, axis=-1, out=None): return imagecodecs.floatpred_decode( data, axis=axis, out=out, dist=2 ) elif key == 34895: if self._encode: def codec(data, axis=-1, out=None): return imagecodecs.floatpred_encode( data, axis=axis, out=out, dist=4 ) else: def codec(data, axis=-1, out=None): return imagecodecs.floatpred_decode( data, axis=axis, out=out, dist=4 ) else: raise KeyError(f'{key} is not a known PREDICTOR') except AttributeError as exc: raise KeyError( f'{PREDICTOR(key)!r}' " requires the 'imagecodecs' package" ) from exc except NotImplementedError as exc: raise KeyError(f'{PREDICTOR(key)!r} not implemented') from exc self._codecs[key] = codec return codec def __contains__(self, key: Any, /) -> bool: try: self[key] except KeyError: return False return True def __iter__(self) -> Iterator[int]: yield 1 # dummy def __len__(self) -> int: return 1 # dummy class DATATYPE(enum.IntEnum): """TIFF tag data types.""" BYTE = 1 """8-bit unsigned integer.""" ASCII = 2 """8-bit byte with last byte null, containing 7-bit ASCII code.""" SHORT = 3 """16-bit unsigned integer.""" LONG = 4 """32-bit unsigned integer.""" RATIONAL = 5 """Two 32-bit unsigned integers, numerator and denominator of fraction.""" SBYTE = 6 """8-bit signed integer.""" UNDEFINED = 7 """8-bit byte that may contain anything.""" SSHORT = 8 """16-bit signed integer.""" SLONG = 9 """32-bit signed integer.""" SRATIONAL = 10 """Two 32-bit signed integers, numerator and denominator of fraction.""" FLOAT = 11 """Single precision (4-byte) IEEE format.""" DOUBLE = 12 """Double precision (8-byte) IEEE format.""" IFD = 13 """Unsigned 4 byte IFD offset.""" UNICODE = 14 """UTF-16 (2-byte) unicode string.""" COMPLEX = 15 """Single precision (8-byte) complex number.""" LONG8 = 16 """Unsigned 8 byte integer (BigTIFF).""" SLONG8 = 17 """Signed 8 byte integer (BigTIFF).""" IFD8 = 18 """Unsigned 8 byte IFD offset (BigTIFF).""" class COMPRESSION(enum.IntEnum): """Values of Compression tag. Compression scheme used on image data. 
""" NONE = 1 """No compression (default).""" CCITTRLE = 2 # CCITT 1D CCITT_T4 = 3 # T4/Group 3 Fax CCITT_T6 = 4 # T6/Group 4 Fax LZW = 5 """Lempel-Ziv-Welch.""" OJPEG = 6 # old-style JPEG JPEG = 7 """New style JPEG.""" ADOBE_DEFLATE = 8 """Deflate, aka ZLIB.""" JBIG_BW = 9 # VC5 JBIG_COLOR = 10 JPEG_99 = 99 # Leaf MOS lossless JPEG IMPACJ = 103 # Pegasus Imaging Corporation DCT KODAK_262 = 262 JPEGXR_NDPI = 22610 """JPEG XR (Hammatsu NDPI).""" NEXT = 32766 SONY_ARW = 32767 PACKED_RAW = 32769 SAMSUNG_SRW = 32770 CCIRLEW = 32771 # Word-aligned 1D Huffman compression SAMSUNG_SRW2 = 32772 PACKBITS = 32773 """PackBits, aka Macintosh RLE.""" THUNDERSCAN = 32809 IT8CTPAD = 32895 # TIFF/IT IT8LW = 32896 # TIFF/IT IT8MP = 32897 # TIFF/IT IT8BL = 32898 # TIFF/IT PIXARFILM = 32908 PIXARLOG = 32909 DEFLATE = 32946 DCS = 32947 APERIO_JP2000_YCBC = 33003 # Matrox libraries """JPEG 2000 YCbCr (Leica Aperio).""" JPEG_2000_LOSSY = 33004 """Lossy JPEG 2000 (Bio-Formats).""" APERIO_JP2000_RGB = 33005 # Kakadu libraries """JPEG 2000 RGB (Leica Aperio).""" ALT_JPEG = 33007 """JPEG (Bio-Formats).""" # PANASONIC_RAW1 = 34316 # PANASONIC_RAW2 = 34826 # PANASONIC_RAW3 = 34828 # PANASONIC_RAW4 = 34830 JBIG = 34661 SGILOG = 34676 # LogLuv32 SGILOG24 = 34677 LURADOC = 34692 # LuraWave JPEG2000 = 34712 """JPEG 2000.""" NIKON_NEF = 34713 JBIG2 = 34715 MDI_BINARY = 34718 # Microsoft Document Imaging MDI_PROGRESSIVE = 34719 # Microsoft Document Imaging MDI_VECTOR = 34720 # Microsoft Document Imaging LERC = 34887 """ESRI Limited Error Raster Compression.""" JPEG_LOSSY = 34892 # DNG LZMA = 34925 """Lempel-Ziv-Markov chain Algorithm.""" ZSTD_DEPRECATED = 34926 WEBP_DEPRECATED = 34927 PNG = 34933 # Objective Pathology Services """Portable Network Graphics (Zoomable Image File format).""" JPEGXR = 34934 """JPEG XR (Zoomable Image File format).""" JETRAW = 48124 """Jetraw by Dotphoton.""" ZSTD = 50000 """Zstandard.""" WEBP = 50001 """WebP.""" JPEGXL = 50002 # GDAL """JPEG XL.""" PIXTIFF = 50013 """ZLIB (Atalasoft).""" JPEGXL_DNG = 52546 """JPEG XL (DNG).""" EER_V0 = 65000 # FIXED82 Thermo Fisher Scientific EER_V1 = 65001 # FIXED72 Thermo Fisher Scientific EER_V2 = 65002 # VARIABLE Thermo Fisher Scientific # KODAK_DCR = 65000 # PENTAX_PEF = 65535 def __bool__(self) -> bool: return self > 1 class PREDICTOR(enum.IntEnum): """Values of Predictor tag. A mathematical operator that is applied to the image data before compression. """ NONE = 1 """No prediction scheme used (default).""" HORIZONTAL = 2 """Horizontal differencing.""" FLOATINGPOINT = 3 """Floating-point horizontal differencing.""" HORIZONTALX2 = 34892 # DNG HORIZONTALX4 = 34893 FLOATINGPOINTX2 = 34894 FLOATINGPOINTX4 = 34895 def __bool__(self) -> bool: return self > 1 class PHOTOMETRIC(enum.IntEnum): """Values of PhotometricInterpretation tag. The color space of the image. """ MINISWHITE = 0 """For bilevel and grayscale images, 0 is imaged as white.""" MINISBLACK = 1 """For bilevel and grayscale images, 0 is imaged as black.""" RGB = 2 """Chroma components are Red, Green, Blue.""" PALETTE = 3 """Single chroma component is index into colormap.""" MASK = 4 SEPARATED = 5 """Chroma components are Cyan, Magenta, Yellow, and Key (black).""" YCBCR = 6 """Chroma components are Luma, blue-difference, and red-difference.""" CIELAB = 8 ICCLAB = 9 ITULAB = 10 CFA = 32803 """Color Filter Array.""" LOGL = 32844 LOGLUV = 32845 LINEAR_RAW = 34892 DEPTH_MAP = 51177 # DNG 1.5 SEMANTIC_MASK = 52527 # DNG 1.6 class FILETYPE(enum.IntFlag): """Values of NewSubfileType tag. 
A general indication of the kind of the image. """ UNDEFINED = 0 """Image is full-resolution (default).""" REDUCEDIMAGE = 1 """Image is reduced-resolution version of another image.""" PAGE = 2 """Image is single page of multi-page image.""" MASK = 4 """Image is transparency mask for another image.""" MACRO = 8 # Aperio SVS, or DNG Depth map """Image is MACRO image (SVS) or depth map for another image (DNG).""" ENHANCED = 16 # DNG """Image contains enhanced image (DNG).""" DNG = 65536 # 65537: Alternative, 65540: Semantic mask class OFILETYPE(enum.IntEnum): """Values of deprecated SubfileType tag.""" UNDEFINED = 0 IMAGE = 1 # full-resolution image REDUCEDIMAGE = 2 # reduced-resolution image PAGE = 3 # single page of multi-page image class FILLORDER(enum.IntEnum): """Values of FillOrder tag. The logical order of bits within a byte. """ MSB2LSB = 1 """Pixel values are stored in higher-order bits of byte (default).""" LSB2MSB = 2 """Pixels values are stored in lower-order bits of byte.""" class ORIENTATION(enum.IntEnum): """Values of Orientation tag. The orientation of the image with respect to the rows and columns. """ TOPLEFT = 1 # default TOPRIGHT = 2 BOTRIGHT = 3 BOTLEFT = 4 LEFTTOP = 5 RIGHTTOP = 6 RIGHTBOT = 7 LEFTBOT = 8 class PLANARCONFIG(enum.IntEnum): """Values of PlanarConfiguration tag. Specifies how components of each pixel are stored. """ CONTIG = 1 """Chunky, component values are stored contiguously (default).""" SEPARATE = 2 """Planar, component values are stored in separate planes.""" class RESUNIT(enum.IntEnum): """Values of ResolutionUnit tag. The unit of measurement for XResolution and YResolution. """ NONE = 1 """No absolute unit of measurement.""" INCH = 2 """Inch (default).""" CENTIMETER = 3 """Centimeter.""" MILLIMETER = 4 """Millimeter (DNG).""" MICROMETER = 5 """Micrometer (DNG).""" def __bool__(self) -> bool: return self > 1 class EXTRASAMPLE(enum.IntEnum): """Values of ExtraSamples tag. Interpretation of extra components in a pixel. """ UNSPECIFIED = 0 """Unspecified data.""" ASSOCALPHA = 1 """Associated alpha data with premultiplied color.""" UNASSALPHA = 2 """Unassociated alpha data.""" class SAMPLEFORMAT(enum.IntEnum): """Values of SampleFormat tag. Data type of samples in a pixel. """ UINT = 1 """Unsigned integer.""" INT = 2 """Signed integer.""" IEEEFP = 3 """IEEE floating-point""" VOID = 4 """Undefined.""" COMPLEXINT = 5 """Complex integer.""" COMPLEXIEEEFP = 6 """Complex floating-point.""" class CHUNKMODE(enum.IntEnum): """ZarrStore chunk modes. Specifies how to chunk data in Zarr 2 stores. 
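    Examples:
        >>> CHUNKMODE(0)
        <CHUNKMODE.STRILE: 0>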
""" STRILE = 0 """Chunk is strip or tile.""" PLANE = 1 """Chunk is image plane.""" PAGE = 2 """Chunk is image in page.""" FILE = 3 """Chunk is image in file.""" # class THRESHOLD(enum.IntEnum): # BILEVEL = 1 # HALFTONE = 2 # ERRORDIFFUSE = 3 # # class GRAYRESPONSEUNIT(enum.IntEnum): # _10S = 1 # _100S = 2 # _1000S = 3 # _10000S = 4 # _100000S = 5 # # class COLORRESPONSEUNIT(enum.IntEnum): # _10S = 1 # _100S = 2 # _1000S = 3 # _10000S = 4 # _100000S = 5 # # class GROUP4OPT(enum.IntEnum): # UNCOMPRESSED = 2 class _TIFF: """Delay-loaded constants, accessible via :py:attr:`TIFF` instance.""" @cached_property def CLASSIC_LE(self) -> TiffFormat: """32-bit little-endian TIFF format.""" return TiffFormat( version=42, byteorder='<', offsetsize=4, offsetformat=' TiffFormat: """32-bit big-endian TIFF format.""" return TiffFormat( version=42, byteorder='>', offsetsize=4, offsetformat='>I', tagnosize=2, tagnoformat='>H', tagsize=12, tagformat1='>HH', tagformat2='>I4s', tagoffsetthreshold=4, ) @cached_property def BIG_LE(self) -> TiffFormat: """64-bit little-endian TIFF format.""" return TiffFormat( version=43, byteorder='<', offsetsize=8, offsetformat=' TiffFormat: """64-bit big-endian TIFF format.""" return TiffFormat( version=43, byteorder='>', offsetsize=8, offsetformat='>Q', tagnosize=8, tagnoformat='>Q', tagsize=20, tagformat1='>HH', tagformat2='>Q8s', tagoffsetthreshold=8, ) @cached_property def NDPI_LE(self) -> TiffFormat: """32-bit little-endian TIFF format with 64-bit offsets.""" return TiffFormat( version=42, byteorder='<', offsetsize=8, # NDPI uses 8 bytes IFD and tag offsets offsetformat=' TiffTagRegistry: """Registry of TIFF tag codes and names from TIFF6, TIFF/EP, EXIF.""" # TODO: divide into baseline, exif, private, ... tags return TiffTagRegistry( ( (11, 'ProcessingSoftware'), (254, 'NewSubfileType'), (255, 'SubfileType'), (256, 'ImageWidth'), (257, 'ImageLength'), (258, 'BitsPerSample'), (259, 'Compression'), (262, 'PhotometricInterpretation'), (263, 'Thresholding'), (264, 'CellWidth'), (265, 'CellLength'), (266, 'FillOrder'), (269, 'DocumentName'), (270, 'ImageDescription'), (271, 'Make'), (272, 'Model'), (273, 'StripOffsets'), (274, 'Orientation'), (277, 'SamplesPerPixel'), (278, 'RowsPerStrip'), (279, 'StripByteCounts'), (280, 'MinSampleValue'), (281, 'MaxSampleValue'), (282, 'XResolution'), (283, 'YResolution'), (284, 'PlanarConfiguration'), (285, 'PageName'), (286, 'XPosition'), (287, 'YPosition'), (288, 'FreeOffsets'), (289, 'FreeByteCounts'), (290, 'GrayResponseUnit'), (291, 'GrayResponseCurve'), (292, 'T4Options'), (293, 'T6Options'), (296, 'ResolutionUnit'), (297, 'PageNumber'), (300, 'ColorResponseUnit'), (301, 'TransferFunction'), (305, 'Software'), (306, 'DateTime'), (315, 'Artist'), (316, 'HostComputer'), (317, 'Predictor'), (318, 'WhitePoint'), (319, 'PrimaryChromaticities'), (320, 'ColorMap'), (321, 'HalftoneHints'), (322, 'TileWidth'), (323, 'TileLength'), (324, 'TileOffsets'), (325, 'TileByteCounts'), (326, 'BadFaxLines'), (327, 'CleanFaxData'), (328, 'ConsecutiveBadFaxLines'), (330, 'SubIFDs'), (332, 'InkSet'), (333, 'InkNames'), (334, 'NumberOfInks'), (336, 'DotRange'), (337, 'TargetPrinter'), (338, 'ExtraSamples'), (339, 'SampleFormat'), (340, 'SMinSampleValue'), (341, 'SMaxSampleValue'), (342, 'TransferRange'), (343, 'ClipPath'), (344, 'XClipPathUnits'), (345, 'YClipPathUnits'), (346, 'Indexed'), (347, 'JPEGTables'), (351, 'OPIProxy'), (400, 'GlobalParametersIFD'), (401, 'ProfileType'), (402, 'FaxProfile'), (403, 'CodingMethods'), (404, 'VersionYear'), (405, 
'ModeNumber'), (433, 'Decode'), (434, 'DefaultImageColor'), (435, 'T82Options'), (437, 'JPEGTables'), # 347 (512, 'JPEGProc'), (513, 'JPEGInterchangeFormat'), (514, 'JPEGInterchangeFormatLength'), (515, 'JPEGRestartInterval'), (517, 'JPEGLosslessPredictors'), (518, 'JPEGPointTransforms'), (519, 'JPEGQTables'), (520, 'JPEGDCTables'), (521, 'JPEGACTables'), (529, 'YCbCrCoefficients'), (530, 'YCbCrSubSampling'), (531, 'YCbCrPositioning'), (532, 'ReferenceBlackWhite'), (559, 'StripRowCounts'), (700, 'XMP'), # XMLPacket (769, 'GDIGamma'), # GDI+ (770, 'ICCProfileDescriptor'), # GDI+ (771, 'SRGBRenderingIntent'), # GDI+ (800, 'ImageTitle'), # GDI+ (907, 'SiffCompress'), # https://github.com/MaimonLab/SiffPy (999, 'USPTO_Miscellaneous'), (4864, 'AndorId'), # TODO, Andor Technology 4864 - 5030 (4869, 'AndorTemperature'), (4876, 'AndorExposureTime'), (4878, 'AndorKineticCycleTime'), (4879, 'AndorAccumulations'), (4881, 'AndorAcquisitionCycleTime'), (4882, 'AndorReadoutTime'), (4884, 'AndorPhotonCounting'), (4885, 'AndorEmDacLevel'), (4890, 'AndorFrames'), (4896, 'AndorHorizontalFlip'), (4897, 'AndorVerticalFlip'), (4898, 'AndorClockwise'), (4899, 'AndorCounterClockwise'), (4904, 'AndorVerticalClockVoltage'), (4905, 'AndorVerticalShiftSpeed'), (4907, 'AndorPreAmpSetting'), (4908, 'AndorCameraSerial'), (4911, 'AndorActualTemperature'), (4912, 'AndorBaselineClamp'), (4913, 'AndorPrescans'), (4914, 'AndorModel'), (4915, 'AndorChipSizeX'), (4916, 'AndorChipSizeY'), (4944, 'AndorBaselineOffset'), (4966, 'AndorSoftwareVersion'), (18246, 'Rating'), (18247, 'XP_DIP_XML'), (18248, 'StitchInfo'), (18249, 'RatingPercent'), (20481, 'ResolutionXUnit'), # GDI+ (20482, 'ResolutionYUnit'), # GDI+ (20483, 'ResolutionXLengthUnit'), # GDI+ (20484, 'ResolutionYLengthUnit'), # GDI+ (20485, 'PrintFlags'), # GDI+ (20486, 'PrintFlagsVersion'), # GDI+ (20487, 'PrintFlagsCrop'), # GDI+ (20488, 'PrintFlagsBleedWidth'), # GDI+ (20489, 'PrintFlagsBleedWidthScale'), # GDI+ (20490, 'HalftoneLPI'), # GDI+ (20491, 'HalftoneLPIUnit'), # GDI+ (20492, 'HalftoneDegree'), # GDI+ (20493, 'HalftoneShape'), # GDI+ (20494, 'HalftoneMisc'), # GDI+ (20495, 'HalftoneScreen'), # GDI+ (20496, 'JPEGQuality'), # GDI+ (20497, 'GridSize'), # GDI+ (20498, 'ThumbnailFormat'), # GDI+ (20499, 'ThumbnailWidth'), # GDI+ (20500, 'ThumbnailHeight'), # GDI+ (20501, 'ThumbnailColorDepth'), # GDI+ (20502, 'ThumbnailPlanes'), # GDI+ (20503, 'ThumbnailRawBytes'), # GDI+ (20504, 'ThumbnailSize'), # GDI+ (20505, 'ThumbnailCompressedSize'), # GDI+ (20506, 'ColorTransferFunction'), # GDI+ (20507, 'ThumbnailData'), (20512, 'ThumbnailImageWidth'), # GDI+ (20513, 'ThumbnailImageHeight'), # GDI+ (20514, 'ThumbnailBitsPerSample'), # GDI+ (20515, 'ThumbnailCompression'), (20516, 'ThumbnailPhotometricInterp'), # GDI+ (20517, 'ThumbnailImageDescription'), # GDI+ (20518, 'ThumbnailEquipMake'), # GDI+ (20519, 'ThumbnailEquipModel'), # GDI+ (20520, 'ThumbnailStripOffsets'), # GDI+ (20521, 'ThumbnailOrientation'), # GDI+ (20522, 'ThumbnailSamplesPerPixel'), # GDI+ (20523, 'ThumbnailRowsPerStrip'), # GDI+ (20524, 'ThumbnailStripBytesCount'), # GDI+ (20525, 'ThumbnailResolutionX'), (20526, 'ThumbnailResolutionY'), (20527, 'ThumbnailPlanarConfig'), # GDI+ (20528, 'ThumbnailResolutionUnit'), (20529, 'ThumbnailTransferFunction'), (20530, 'ThumbnailSoftwareUsed'), # GDI+ (20531, 'ThumbnailDateTime'), # GDI+ (20532, 'ThumbnailArtist'), # GDI+ (20533, 'ThumbnailWhitePoint'), # GDI+ (20534, 'ThumbnailPrimaryChromaticities'), # GDI+ (20535, 'ThumbnailYCbCrCoefficients'), # GDI+ (20536, 
'ThumbnailYCbCrSubsampling'), # GDI+ (20537, 'ThumbnailYCbCrPositioning'), (20538, 'ThumbnailRefBlackWhite'), # GDI+ (20539, 'ThumbnailCopyRight'), # GDI+ (20545, 'InteroperabilityIndex'), (20546, 'InteroperabilityVersion'), (20624, 'LuminanceTable'), (20625, 'ChrominanceTable'), (20736, 'FrameDelay'), # GDI+ (20737, 'LoopCount'), # GDI+ (20738, 'GlobalPalette'), # GDI+ (20739, 'IndexBackground'), # GDI+ (20740, 'IndexTransparent'), # GDI+ (20752, 'PixelUnit'), # GDI+ (20753, 'PixelPerUnitX'), # GDI+ (20754, 'PixelPerUnitY'), # GDI+ (20755, 'PaletteHistogram'), # GDI+ (28672, 'SonyRawFileType'), # Sony ARW (28722, 'VignettingCorrParams'), # Sony ARW (28725, 'ChromaticAberrationCorrParams'), # Sony ARW (28727, 'DistortionCorrParams'), # Sony ARW # Private tags >= 32768 (32781, 'ImageID'), (32931, 'WangTag1'), (32932, 'WangAnnotation'), (32933, 'WangTag3'), (32934, 'WangTag4'), (32953, 'ImageReferencePoints'), (32954, 'RegionXformTackPoint'), (32955, 'WarpQuadrilateral'), (32956, 'AffineTransformMat'), (32995, 'Matteing'), (32996, 'DataType'), # use SampleFormat (32997, 'ImageDepth'), (32998, 'TileDepth'), (33300, 'ImageFullWidth'), (33301, 'ImageFullLength'), (33302, 'TextureFormat'), (33303, 'TextureWrapModes'), (33304, 'FieldOfViewCotangent'), (33305, 'MatrixWorldToScreen'), (33306, 'MatrixWorldToCamera'), (33405, 'Model2'), (33421, 'CFARepeatPatternDim'), (33422, 'CFAPattern'), (33423, 'BatteryLevel'), (33424, 'KodakIFD'), (33434, 'ExposureTime'), (33437, 'FNumber'), (33432, 'Copyright'), (33445, 'MDFileTag'), (33446, 'MDScalePixel'), (33447, 'MDColorTable'), (33448, 'MDLabName'), (33449, 'MDSampleInfo'), (33450, 'MDPrepDate'), (33451, 'MDPrepTime'), (33452, 'MDFileUnits'), (33465, 'NiffRotation'), # NIFF (33466, 'NiffNavyCompression'), # NIFF (33467, 'NiffTileIndex'), # NIFF (33471, 'OlympusINI'), (33550, 'ModelPixelScaleTag'), (33560, 'OlympusSIS'), # see also 33471 and 34853 (33589, 'AdventScale'), (33590, 'AdventRevision'), (33628, 'UIC1tag'), # Metamorph Universal Imaging Corp STK (33629, 'UIC2tag'), (33630, 'UIC3tag'), (33631, 'UIC4tag'), (33723, 'IPTCNAA'), (33858, 'ExtendedTagsOffset'), # DEFF points IFD with tags (33918, 'IntergraphPacketData'), # INGRPacketDataTag (33919, 'IntergraphFlagRegisters'), # INGRFlagRegisters (33920, 'IntergraphMatrixTag'), # IrasBTransformationMatrix (33921, 'INGRReserved'), (33922, 'ModelTiepointTag'), (33923, 'LeicaMagic'), (34016, 'Site'), # 34016..34032 ANSI IT8 TIFF/IT (34017, 'ColorSequence'), (34018, 'IT8Header'), (34019, 'RasterPadding'), (34020, 'BitsPerRunLength'), (34021, 'BitsPerExtendedRunLength'), (34022, 'ColorTable'), (34023, 'ImageColorIndicator'), (34024, 'BackgroundColorIndicator'), (34025, 'ImageColorValue'), (34026, 'BackgroundColorValue'), (34027, 'PixelIntensityRange'), (34028, 'TransparencyIndicator'), (34029, 'ColorCharacterization'), (34030, 'HCUsage'), (34031, 'TrapIndicator'), (34032, 'CMYKEquivalent'), (34118, 'CZ_SEM'), # Zeiss SEM (34152, 'AFCP_IPTC'), (34232, 'PixelMagicJBIGOptions'), # EXIF, also TI FrameCount (34263, 'JPLCartoIFD'), (34122, 'IPLAB'), # number of images (34264, 'ModelTransformationTag'), (34306, 'WB_GRGBLevels'), # Leaf MOS (34310, 'LeafData'), (34361, 'MM_Header'), (34362, 'MM_Stamp'), (34363, 'MM_Unknown'), (34377, 'ImageResources'), # Photoshop (34386, 'MM_UserBlock'), (34412, 'CZ_LSMINFO'), (34665, 'ExifTag'), (34675, 'InterColorProfile'), # ICCProfile (34680, 'FEI_SFEG'), # (34682, 'FEI_HELIOS'), # (34683, 'FEI_TITAN'), # (34687, 'FXExtensions'), (34688, 'MultiProfiles'), (34689, 'SharedData'), 
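# Editorial aside, not part of the upstream registry: codes and names need
# not be unique here; for example, 347 and 437 are both registered as
# 'JPEGTables'. A minimal lookup sketch, using only the .get method that
# read_tags below relies on:
#   TIFF.TAGS.get(270)    # 'ImageDescription'
#   TIFF.TAGS.get(99999)  # None; read_tags falls back to str(code)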
(34690, 'T88Options'), (34710, 'MarCCD'), # offset to MarCCD header (34732, 'ImageLayer'), (34735, 'GeoKeyDirectoryTag'), (34736, 'GeoDoubleParamsTag'), (34737, 'GeoAsciiParamsTag'), (34750, 'JBIGOptions'), (34821, 'PIXTIFF'), # ? Pixel Translations Inc (34850, 'ExposureProgram'), (34852, 'SpectralSensitivity'), (34853, 'GPSTag'), # GPSIFD also OlympusSIS2 (34853, 'OlympusSIS2'), (34855, 'ISOSpeedRatings'), (34855, 'PhotographicSensitivity'), (34856, 'OECF'), # optoelectric conversion factor (34857, 'Interlace'), # TIFF/EP (34858, 'TimeZoneOffset'), # TIFF/EP (34859, 'SelfTimerMode'), # TIFF/EP (34864, 'SensitivityType'), (34865, 'StandardOutputSensitivity'), (34866, 'RecommendedExposureIndex'), (34867, 'ISOSpeed'), (34868, 'ISOSpeedLatitudeyyy'), (34869, 'ISOSpeedLatitudezzz'), (34908, 'HylaFAXFaxRecvParams'), (34909, 'HylaFAXFaxSubAddress'), (34910, 'HylaFAXFaxRecvTime'), (34911, 'FaxDcs'), (34929, 'FedexEDR'), (34954, 'LeafSubIFD'), (34959, 'Aphelion1'), (34960, 'Aphelion2'), (34961, 'AphelionInternal'), # ADCIS (36864, 'ExifVersion'), (36867, 'DateTimeOriginal'), (36868, 'DateTimeDigitized'), (36873, 'GooglePlusUploadCode'), (36880, 'OffsetTime'), (36881, 'OffsetTimeOriginal'), (36882, 'OffsetTimeDigitized'), # TODO, Pilatus/CHESS/TV6 36864..37120 conflicting with Exif (36864, 'TVX_Unknown'), (36865, 'TVX_NumExposure'), (36866, 'TVX_NumBackground'), (36867, 'TVX_ExposureTime'), (36868, 'TVX_BackgroundTime'), (36870, 'TVX_Unknown'), (36873, 'TVX_SubBpp'), (36874, 'TVX_SubWide'), (36875, 'TVX_SubHigh'), (36876, 'TVX_BlackLevel'), (36877, 'TVX_DarkCurrent'), (36878, 'TVX_ReadNoise'), (36879, 'TVX_DarkCurrentNoise'), (36880, 'TVX_BeamMonitor'), (37120, 'TVX_UserVariables'), # A/D values (37121, 'ComponentsConfiguration'), (37122, 'CompressedBitsPerPixel'), (37377, 'ShutterSpeedValue'), (37378, 'ApertureValue'), (37379, 'BrightnessValue'), (37380, 'ExposureBiasValue'), (37381, 'MaxApertureValue'), (37382, 'SubjectDistance'), (37383, 'MeteringMode'), (37384, 'LightSource'), (37385, 'Flash'), (37386, 'FocalLength'), (37387, 'FlashEnergy'), # TIFF/EP (37388, 'SpatialFrequencyResponse'), # TIFF/EP (37389, 'Noise'), # TIFF/EP (37390, 'FocalPlaneXResolution'), # TIFF/EP (37391, 'FocalPlaneYResolution'), # TIFF/EP (37392, 'FocalPlaneResolutionUnit'), # TIFF/EP (37393, 'ImageNumber'), # TIFF/EP (37394, 'SecurityClassification'), # TIFF/EP (37395, 'ImageHistory'), # TIFF/EP (37396, 'SubjectLocation'), # TIFF/EP (37397, 'ExposureIndex'), # TIFF/EP (37398, 'TIFFEPStandardID'), # TIFF/EP (37399, 'SensingMethod'), # TIFF/EP (37434, 'CIP3DataFile'), (37435, 'CIP3Sheet'), (37436, 'CIP3Side'), (37439, 'StoNits'), (37500, 'MakerNote'), (37510, 'UserComment'), (37520, 'SubsecTime'), (37521, 'SubsecTimeOriginal'), (37522, 'SubsecTimeDigitized'), (37679, 'MODIText'), # Microsoft Office Document Imaging (37680, 'MODIOLEPropertySetStorage'), (37681, 'MODIPositioning'), (37701, 'AgilentBinary'), # private structure (37702, 'AgilentString'), # file description (37706, 'TVIPS'), # offset to TemData structure (37707, 'TVIPS1'), (37708, 'TVIPS2'), # same TemData structure as undefined (37724, 'ImageSourceData'), # Photoshop (37888, 'Temperature'), (37889, 'Humidity'), (37890, 'Pressure'), (37891, 'WaterDepth'), (37892, 'Acceleration'), (37893, 'CameraElevationAngle'), (40000, 'XPos'), # Janelia (40001, 'YPos'), (40002, 'ZPos'), (40001, 'MC_IpWinScal'), # Media Cybernetics (40001, 'RecipName'), # MS FAX (40002, 'RecipNumber'), (40003, 'SenderName'), (40004, 'Routing'), (40005, 'CallerId'), (40006, 'TSID'), (40007, 
'CSID'), (40008, 'FaxTime'), (40100, 'MC_IdOld'), (40106, 'MC_Unknown'), (40965, 'InteroperabilityTag'), # InteropOffset (40091, 'XPTitle'), (40092, 'XPComment'), (40093, 'XPAuthor'), (40094, 'XPKeywords'), (40095, 'XPSubject'), (40960, 'FlashpixVersion'), (40961, 'ColorSpace'), (40962, 'PixelXDimension'), (40963, 'PixelYDimension'), (40964, 'RelatedSoundFile'), (40976, 'SamsungRawPointersOffset'), (40977, 'SamsungRawPointersLength'), (41217, 'SamsungRawByteOrder'), (41218, 'SamsungRawUnknown'), (41483, 'FlashEnergy'), (41484, 'SpatialFrequencyResponse'), (41485, 'Noise'), # 37389 (41486, 'FocalPlaneXResolution'), # 37390 (41487, 'FocalPlaneYResolution'), # 37391 (41488, 'FocalPlaneResolutionUnit'), # 37392 (41489, 'ImageNumber'), # 37393 (41490, 'SecurityClassification'), # 37394 (41491, 'ImageHistory'), # 37395 (41492, 'SubjectLocation'), # 37396 (41493, 'ExposureIndex '), # 37397 (41494, 'TIFF-EPStandardID'), (41495, 'SensingMethod'), # 37399 (41728, 'FileSource'), (41729, 'SceneType'), (41730, 'CFAPattern'), # 33422 (41985, 'CustomRendered'), (41986, 'ExposureMode'), (41987, 'WhiteBalance'), (41988, 'DigitalZoomRatio'), (41989, 'FocalLengthIn35mmFilm'), (41990, 'SceneCaptureType'), (41991, 'GainControl'), (41992, 'Contrast'), (41993, 'Saturation'), (41994, 'Sharpness'), (41995, 'DeviceSettingDescription'), (41996, 'SubjectDistanceRange'), (42016, 'ImageUniqueID'), (42032, 'CameraOwnerName'), (42033, 'BodySerialNumber'), (42034, 'LensSpecification'), (42035, 'LensMake'), (42036, 'LensModel'), (42037, 'LensSerialNumber'), (42080, 'CompositeImage'), (42081, 'SourceImageNumberCompositeImage'), (42082, 'SourceExposureTimesCompositeImage'), (42112, 'GDAL_METADATA'), (42113, 'GDAL_NODATA'), (42240, 'Gamma'), (43314, 'NIHImageHeader'), (44992, 'ExpandSoftware'), (44993, 'ExpandLens'), (44994, 'ExpandFilm'), (44995, 'ExpandFilterLens'), (44996, 'ExpandScanner'), (44997, 'ExpandFlashLamp'), (48129, 'PixelFormat'), # HDP and WDP (48130, 'Transformation'), (48131, 'Uncompressed'), (48132, 'ImageType'), (48256, 'ImageWidth'), # 256 (48257, 'ImageHeight'), (48258, 'WidthResolution'), (48259, 'HeightResolution'), (48320, 'ImageOffset'), (48321, 'ImageByteCount'), (48322, 'AlphaOffset'), (48323, 'AlphaByteCount'), (48324, 'ImageDataDiscard'), (48325, 'AlphaDataDiscard'), (50003, 'KodakAPP3'), (50215, 'OceScanjobDescription'), (50216, 'OceApplicationSelector'), (50217, 'OceIdentificationNumber'), (50218, 'OceImageLogicCharacteristics'), (50255, 'Annotations'), (50288, 'MC_Id'), # Media Cybernetics (50289, 'MC_XYPosition'), (50290, 'MC_ZPosition'), (50291, 'MC_XYCalibration'), (50292, 'MC_LensCharacteristics'), (50293, 'MC_ChannelName'), (50294, 'MC_ExcitationWavelength'), (50295, 'MC_TimeStamp'), (50296, 'MC_FrameProperties'), (50341, 'PrintImageMatching'), (50495, 'PCO_RAW'), # TODO, PCO CamWare (50547, 'OriginalFileName'), (50560, 'USPTO_OriginalContentType'), # US Patent Office (50561, 'USPTO_RotationCode'), (50648, 'CR2Unknown1'), (50649, 'CR2Unknown2'), (50656, 'CR2CFAPattern'), (50674, 'LercParameters'), # ESRI 50674 .. 50677 (50706, 'DNGVersion'), # DNG 50706 ..
51114 (50707, 'DNGBackwardVersion'), (50708, 'UniqueCameraModel'), (50709, 'LocalizedCameraModel'), (50710, 'CFAPlaneColor'), (50711, 'CFALayout'), (50712, 'LinearizationTable'), (50713, 'BlackLevelRepeatDim'), (50714, 'BlackLevel'), (50715, 'BlackLevelDeltaH'), (50716, 'BlackLevelDeltaV'), (50717, 'WhiteLevel'), (50718, 'DefaultScale'), (50719, 'DefaultCropOrigin'), (50720, 'DefaultCropSize'), (50721, 'ColorMatrix1'), (50722, 'ColorMatrix2'), (50723, 'CameraCalibration1'), (50724, 'CameraCalibration2'), (50725, 'ReductionMatrix1'), (50726, 'ReductionMatrix2'), (50727, 'AnalogBalance'), (50728, 'AsShotNeutral'), (50729, 'AsShotWhiteXY'), (50730, 'BaselineExposure'), (50731, 'BaselineNoise'), (50732, 'BaselineSharpness'), (50733, 'BayerGreenSplit'), (50734, 'LinearResponseLimit'), (50735, 'CameraSerialNumber'), (50736, 'LensInfo'), (50737, 'ChromaBlurRadius'), (50738, 'AntiAliasStrength'), (50739, 'ShadowScale'), (50740, 'DNGPrivateData'), (50741, 'MakerNoteSafety'), (50752, 'RawImageSegmentation'), (50778, 'CalibrationIlluminant1'), (50779, 'CalibrationIlluminant2'), (50780, 'BestQualityScale'), (50781, 'RawDataUniqueID'), (50784, 'AliasLayerMetadata'), (50827, 'OriginalRawFileName'), (50828, 'OriginalRawFileData'), (50829, 'ActiveArea'), (50830, 'MaskedAreas'), (50831, 'AsShotICCProfile'), (50832, 'AsShotPreProfileMatrix'), (50833, 'CurrentICCProfile'), (50834, 'CurrentPreProfileMatrix'), (50838, 'IJMetadataByteCounts'), (50839, 'IJMetadata'), (50844, 'RPCCoefficientTag'), (50879, 'ColorimetricReference'), (50885, 'SRawType'), (50898, 'PanasonicTitle'), (50899, 'PanasonicTitle2'), (50908, 'RSID'), # DGIWG (50909, 'GEO_METADATA'), # DGIWG XML (50931, 'CameraCalibrationSignature'), (50932, 'ProfileCalibrationSignature'), (50933, 'ProfileIFD'), # EXTRACAMERAPROFILES (50934, 'AsShotProfileName'), (50935, 'NoiseReductionApplied'), (50936, 'ProfileName'), (50937, 'ProfileHueSatMapDims'), (50938, 'ProfileHueSatMapData1'), (50939, 'ProfileHueSatMapData2'), (50940, 'ProfileToneCurve'), (50941, 'ProfileEmbedPolicy'), (50942, 'ProfileCopyright'), (50964, 'ForwardMatrix1'), (50965, 'ForwardMatrix2'), (50966, 'PreviewApplicationName'), (50967, 'PreviewApplicationVersion'), (50968, 'PreviewSettingsName'), (50969, 'PreviewSettingsDigest'), (50970, 'PreviewColorSpace'), (50971, 'PreviewDateTime'), (50972, 'RawImageDigest'), (50973, 'OriginalRawFileDigest'), (50974, 'SubTileBlockSize'), (50975, 'RowInterleaveFactor'), (50981, 'ProfileLookTableDims'), (50982, 'ProfileLookTableData'), (51008, 'OpcodeList1'), (51009, 'OpcodeList2'), (51022, 'OpcodeList3'), (51023, 'FibicsXML'), # (51041, 'NoiseProfile'), (51043, 'TimeCodes'), (51044, 'FrameRate'), (51058, 'TStop'), (51081, 'ReelName'), (51089, 'OriginalDefaultFinalSize'), (51090, 'OriginalBestQualitySize'), (51091, 'OriginalDefaultCropSize'), (51105, 'CameraLabel'), (51107, 'ProfileHueSatMapEncoding'), (51108, 'ProfileLookTableEncoding'), (51109, 'BaselineExposureOffset'), (51110, 'DefaultBlackRender'), (51111, 'NewRawImageDigest'), (51112, 'RawToPreviewGain'), (51113, 'CacheBlob'), (51114, 'CacheVersion'), (51123, 'MicroManagerMetadata'), (51125, 'DefaultUserCrop'), (51159, 'ZIFmetadata'), # Objective Pathology Services (51160, 'ZIFannotations'), # Objective Pathology Services (51177, 'DepthFormat'), (51178, 'DepthNear'), (51179, 'DepthFar'), (51180, 'DepthUnits'), (51181, 'DepthMeasureType'), (51182, 'EnhanceParams'), (52525, 'ProfileGainTableMap'), # DNG 1.6 (52526, 'SemanticName'), # DNG 1.6 (52528, 'SemanticInstanceID'), # DNG 1.6 (52536, 
'MaskSubArea'), # DNG 1.6 (52543, 'RGBTables'), # DNG 1.6 (52529, 'CalibrationIlluminant3'), # DNG 1.6 (52531, 'ColorMatrix3'), # DNG 1.6 (52530, 'CameraCalibration3'), # DNG 1.6 (52538, 'ReductionMatrix3'), # DNG 1.6 (52537, 'ProfileHueSatMapData3'), # DNG 1.6 (52532, 'ForwardMatrix3'), # DNG 1.6 (52533, 'IlluminantData1'), # DNG 1.6 (52534, 'IlluminantData2'), # DNG 1.6 (52535, 'IlluminantData3'), # DNG 1.6 (52544, 'ProfileGainTableMap2'), # DNG 1.7 (52547, 'ColumnInterleaveFactor'), # DNG 1.7 (52548, 'ImageSequenceInfo'), # DNG 1.7 (52550, 'ImageStats'), # DNG 1.7 (52551, 'ProfileDynamicRange'), # DNG 1.7 (52552, 'ProfileGroupName'), # DNG 1.7 (52553, 'JXLDistance'), # DNG 1.7 (52554, 'JXLEffort'), # DNG 1.7 (52555, 'JXLDecodeSpeed'), # DNG 1.7 (55000, 'AperioUnknown55000'), (55001, 'AperioMagnification'), (55002, 'AperioMPP'), (55003, 'AperioScanScopeID'), (55004, 'AperioDate'), (59932, 'Padding'), (59933, 'OffsetSchema'), # Reusable Tags 65000-65535 # (65000, 'DimapDocumentXML'), # EER metadata: # (65001, 'AcquisitionMetadata'), # (65002, 'FrameMetadata'), # (65006, 'ImageMetadata'), # (65007, 'PosSkipBits'), # (65008, 'HorzSubBits'), # (65009, 'VertSubBits'), # Photoshop Camera RAW EXIF tags: # (65000, 'OwnerName'), # (65001, 'SerialNumber'), # (65002, 'Lens'), # (65024, 'KodakKDCPrivateIFD'), # (65100, 'RawFile'), # (65101, 'Converter'), # (65102, 'WhiteBalance'), # (65105, 'Exposure'), # (65106, 'Shadows'), # (65107, 'Brightness'), # (65108, 'Contrast'), # (65109, 'Saturation'), # (65110, 'Sharpness'), # (65111, 'Smoothness'), # (65112, 'MoireFilter'), (65200, 'FlexXML'), ) ) @cached_property def TAG_READERS( self, ) -> dict[int, Callable[[FileHandle, ByteOrder, int, int, int], Any]]: # map tag codes to import functions return { 301: read_colormap, 320: read_colormap, # 700: read_bytes, # read_utf8, # 34377: read_bytes, 33723: read_bytes, # 34675: read_bytes, 33628: read_uic1tag, # Universal Imaging Corp STK 33629: read_uic2tag, 33630: read_uic3tag, 33631: read_uic4tag, 34118: read_cz_sem, # Carl Zeiss SEM 34361: read_mm_header, # Olympus FluoView 34362: read_mm_stamp, 34363: read_numpy, # MM_Unknown 34386: read_numpy, # MM_UserBlock 34412: read_cz_lsminfo, # Carl Zeiss LSM 34680: read_fei_metadata, # S-FEG 34682: read_fei_metadata, # Helios NanoLab 37706: read_tvips_header, # TVIPS EMMENU 37724: read_bytes, # ImageSourceData 33923: read_bytes, # read_leica_magic 43314: read_nih_image_header, # 40001: read_bytes, 40100: read_bytes, 50288: read_bytes, 50296: read_bytes, 50839: read_bytes, 51123: read_json, 33471: read_sis_ini, 33560: read_sis, 34665: read_exif_ifd, 34853: read_gps_ifd, # conflicts with OlympusSIS 40965: read_interoperability_ifd, 65426: read_numpy, # NDPI McuStarts 65432: read_numpy, # NDPI McuStartsHighBytes 65439: read_numpy, # NDPI unknown 65459: read_bytes, # NDPI bytes, not string } @cached_property def TAG_LOAD(self) -> frozenset[int]: # tags whose values are not delay loaded return frozenset( ( 258, # BitsPerSample 270, # ImageDescription 273, # StripOffsets 277, # SamplesPerPixel 279, # StripByteCounts 282, # XResolution 283, # YResolution # 301, # TransferFunction 305, # Software # 306, # DateTime # 320, # ColorMap 324, # TileOffsets 325, # TileByteCounts 330, # SubIFDs 338, # ExtraSamples 339, # SampleFormat 347, # JPEGTables 513, # JPEGInterchangeFormat 514, # JPEGInterchangeFormatLength 530, # YCbCrSubSampling 33628, # UIC1tag 42113, # GDAL_NODATA 50838, # IJMetadataByteCounts 50839, # IJMetadata ) ) @cached_property def TAG_FILTERED(self) ->
frozenset[int]: # tags filtered from extratags in :py:meth:`TiffWriter.write` return frozenset( ( 256, # ImageWidth 257, # ImageLength 258, # BitsPerSample 259, # Compression 262, # PhotometricInterpretation 266, # FillOrder 273, # StripOffsets 277, # SamplesPerPixel 278, # RowsPerStrip 279, # StripByteCounts 284, # PlanarConfiguration 317, # Predictor 322, # TileWidth 323, # TileLength 324, # TileOffsets 325, # TileByteCounts 330, # SubIFDs, 338, # ExtraSamples 339, # SampleFormat 400, # GlobalParametersIFD 32997, # ImageDepth 32998, # TileDepth 34665, # ExifTag 34853, # GPSTag 40965, # InteroperabilityTag ) ) @cached_property def TAG_TUPLE(self) -> frozenset[int]: # tags whose values must be stored as tuples return frozenset( ( 273, 279, 282, 283, 324, 325, 330, 338, 513, 514, 530, 531, 34736, 50838, ) ) @cached_property def TAG_ATTRIBUTES(self) -> dict[int, str]: # map tag codes to TiffPage attribute names return { 254: 'subfiletype', 256: 'imagewidth', 257: 'imagelength', # 258: 'bitspersample', # set manually 259: 'compression', 262: 'photometric', 266: 'fillorder', 270: 'description', 277: 'samplesperpixel', 278: 'rowsperstrip', 284: 'planarconfig', # 301: 'transferfunction', # delay load 305: 'software', # 320: 'colormap', # delay load 317: 'predictor', 322: 'tilewidth', 323: 'tilelength', 330: 'subifds', 338: 'extrasamples', # 339: 'sampleformat', # set manually 347: 'jpegtables', 530: 'subsampling', 32997: 'imagedepth', 32998: 'tiledepth', } @cached_property def TAG_ENUM(self) -> dict[int, type[enum.Enum]]: # map tag codes to Enums return { 254: FILETYPE, 255: OFILETYPE, 259: COMPRESSION, 262: PHOTOMETRIC, # 263: THRESHOLD, 266: FILLORDER, 274: ORIENTATION, 284: PLANARCONFIG, # 290: GRAYRESPONSEUNIT, # 292: GROUP3OPT # 293: GROUP4OPT 296: RESUNIT, # 300: COLORRESPONSEUNIT, 317: PREDICTOR, 338: EXTRASAMPLE, 339: SAMPLEFORMAT, # 512: JPEGPROC # 531: YCBCRPOSITION } @cached_property def EXIF_TAGS(self) -> TiffTagRegistry: """Registry of EXIF tags, including private Photoshop Camera RAW.""" # 65000 - 65112 Photoshop Camera RAW EXIF tags tags = TiffTagRegistry( ( (65000, 'OwnerName'), (65001, 'SerialNumber'), (65002, 'Lens'), (65100, 'RawFile'), (65101, 'Converter'), (65102, 'WhiteBalance'), (65105, 'Exposure'), (65106, 'Shadows'), (65107, 'Brightness'), (65108, 'Contrast'), (65109, 'Saturation'), (65110, 'Sharpness'), (65111, 'Smoothness'), (65112, 'MoireFilter'), ) ) tags.update(TIFF.TAGS) return tags @cached_property def NDPI_TAGS(self) -> TiffTagRegistry: """Registry of private TIFF tags for Hamamatsu NDPI (65420-65458).""" # TODO: obtain specification return TiffTagRegistry( ( (65324, 'OffsetHighBytes'), (65325, 'ByteCountHighBytes'), (65420, 'FileFormat'), (65421, 'Magnification'), # SourceLens (65422, 'XOffsetFromSlideCenter'), (65423, 'YOffsetFromSlideCenter'), (65424, 'ZOffsetFromSlideCenter'), # FocalPlane (65425, 'TissueIndex'), (65426, 'McuStarts'), (65427, 'SlideLabel'), (65428, 'AuthCode'), # ? 
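# Editorial note (the NDPI specification is unavailable, per the TODO
# above): codes without a known public name are registered below under
# their numeric code rendered as a string, for example (65429, '65429').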
(65429, '65429'), (65430, '65430'), (65431, '65431'), (65432, 'McuStartsHighBytes'), (65433, '65433'), (65434, 'Fluorescence'), # FilterSetName, Channel (65435, 'ExposureRatio'), (65436, 'RedMultiplier'), (65437, 'GreenMultiplier'), (65438, 'BlueMultiplier'), (65439, 'FocusPoints'), (65440, 'FocusPointRegions'), (65441, 'CaptureMode'), (65442, 'ScannerSerialNumber'), (65443, '65443'), (65444, 'JpegQuality'), (65445, 'RefocusInterval'), (65446, 'FocusOffset'), (65447, 'BlankLines'), (65448, 'FirmwareVersion'), (65449, 'Comments'), # PropertyMap, CalibrationInfo (65450, 'LabelObscured'), (65451, 'Wavelength'), (65452, '65452'), (65453, 'LampAge'), (65454, 'ExposureTime'), (65455, 'FocusTime'), (65456, 'ScanTime'), (65457, 'WriteTime'), (65458, 'FullyAutoFocus'), (65500, 'DefaultGamma'), ) ) @cached_property def GPS_TAGS(self) -> TiffTagRegistry: """Registry of GPS IFD tags.""" return TiffTagRegistry( ( (0, 'GPSVersionID'), (1, 'GPSLatitudeRef'), (2, 'GPSLatitude'), (3, 'GPSLongitudeRef'), (4, 'GPSLongitude'), (5, 'GPSAltitudeRef'), (6, 'GPSAltitude'), (7, 'GPSTimeStamp'), (8, 'GPSSatellites'), (9, 'GPSStatus'), (10, 'GPSMeasureMode'), (11, 'GPSDOP'), (12, 'GPSSpeedRef'), (13, 'GPSSpeed'), (14, 'GPSTrackRef'), (15, 'GPSTrack'), (16, 'GPSImgDirectionRef'), (17, 'GPSImgDirection'), (18, 'GPSMapDatum'), (19, 'GPSDestLatitudeRef'), (20, 'GPSDestLatitude'), (21, 'GPSDestLongitudeRef'), (22, 'GPSDestLongitude'), (23, 'GPSDestBearingRef'), (24, 'GPSDestBearing'), (25, 'GPSDestDistanceRef'), (26, 'GPSDestDistance'), (27, 'GPSProcessingMethod'), (28, 'GPSAreaInformation'), (29, 'GPSDateStamp'), (30, 'GPSDifferential'), (31, 'GPSHPositioningError'), ) ) @cached_property def IOP_TAGS(self) -> TiffTagRegistry: """Registry of Interoperability IFD tags.""" return TiffTagRegistry( ( (1, 'InteroperabilityIndex'), (2, 'InteroperabilityVersion'), (4096, 'RelatedImageFileFormat'), (4097, 'RelatedImageWidth'), (4098, 'RelatedImageLength'), ) ) @cached_property def PHOTOMETRIC_SAMPLES(self) -> dict[int, int]: """Map :py:class:`PHOTOMETRIC` to number of photometric samples.""" return { 0: 1, # MINISWHITE 1: 1, # MINISBLACK 2: 3, # RGB 3: 1, # PALETTE 4: 1, # MASK 5: 4, # SEPARATED 6: 3, # YCBCR 8: 3, # CIELAB 9: 3, # ICCLAB 10: 3, # ITULAB 32803: 1, # CFA 32844: 1, # LOGL ? 32845: 3, # LOGLUV 34892: 3, # LINEAR_RAW ? 51177: 1, # DEPTH_MAP ? 52527: 1, # SEMANTIC_MASK ? 
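# Usage sketch (editorial; 'page' is a hypothetical TiffPage whose
# 'photometric' attribute is populated via TAG_ATTRIBUTES above):
#   samples = TIFF.PHOTOMETRIC_SAMPLES[int(page.photometric)]
# Values marked '?' above are best-effort guesses, not from a specification.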
} @cached_property def DATA_FORMATS(self) -> dict[int, str]: """Map :py:class:`DATATYPE` to Python struct formats.""" return { 1: '1B', 2: '1s', 3: '1H', 4: '1I', 5: '2I', 6: '1b', 7: '1B', 8: '1h', 9: '1i', 10: '2i', 11: '1f', 12: '1d', 13: '1I', # 14: '', # 15: '', 16: '1Q', 17: '1q', 18: '1Q', } @cached_property def DATA_DTYPES(self) -> dict[str, int]: """Map NumPy dtype to :py:class:`DATATYPE`.""" return { 'B': 1, 's': 2, 'H': 3, 'I': 4, '2I': 5, 'b': 6, 'h': 8, 'i': 9, '2i': 10, 'f': 11, 'd': 12, 'Q': 16, 'q': 17, } @cached_property def SAMPLE_DTYPES(self) -> dict[tuple[int, int | tuple[int, ...]], str]: """Map :py:class:`SAMPLEFORMAT` and BitsPerSample to NumPy dtype.""" return { # UINT (1, 1): '?', # bitmap (1, 2): 'B', (1, 3): 'B', (1, 4): 'B', (1, 5): 'B', (1, 6): 'B', (1, 7): 'B', (1, 8): 'B', (1, 9): 'H', (1, 10): 'H', (1, 11): 'H', (1, 12): 'H', (1, 13): 'H', (1, 14): 'H', (1, 15): 'H', (1, 16): 'H', (1, 17): 'I', (1, 18): 'I', (1, 19): 'I', (1, 20): 'I', (1, 21): 'I', (1, 22): 'I', (1, 23): 'I', (1, 24): 'I', (1, 25): 'I', (1, 26): 'I', (1, 27): 'I', (1, 28): 'I', (1, 29): 'I', (1, 30): 'I', (1, 31): 'I', (1, 32): 'I', (1, 64): 'Q', # VOID : treat as UINT (4, 1): '?', # bitmap (4, 2): 'B', (4, 3): 'B', (4, 4): 'B', (4, 5): 'B', (4, 6): 'B', (4, 7): 'B', (4, 8): 'B', (4, 9): 'H', (4, 10): 'H', (4, 11): 'H', (4, 12): 'H', (4, 13): 'H', (4, 14): 'H', (4, 15): 'H', (4, 16): 'H', (4, 17): 'I', (4, 18): 'I', (4, 19): 'I', (4, 20): 'I', (4, 21): 'I', (4, 22): 'I', (4, 23): 'I', (4, 24): 'I', (4, 25): 'I', (4, 26): 'I', (4, 27): 'I', (4, 28): 'I', (4, 29): 'I', (4, 30): 'I', (4, 31): 'I', (4, 32): 'I', (4, 64): 'Q', # INT (2, 8): 'b', (2, 16): 'h', (2, 32): 'i', (2, 64): 'q', # IEEEFP (3, 16): 'e', (3, 24): 'f', # float24 bit not supported by numpy (3, 32): 'f', (3, 64): 'd', # COMPLEXIEEEFP (6, 64): 'F', (6, 128): 'D', # RGB565 (1, (5, 6, 5)): 'B', # COMPLEXINT : not supported by numpy (5, 16): 'E', (5, 32): 'F', (5, 64): 'D', } @cached_property def PREDICTORS(self) -> Mapping[int, Callable[..., Any]]: """Map :py:class:`PREDICTOR` value to encode function.""" return PredictorCodec(True) @cached_property def UNPREDICTORS(self) -> Mapping[int, Callable[..., Any]]: """Map :py:class:`PREDICTOR` value to decode function.""" return PredictorCodec(False) @cached_property def COMPRESSORS(self) -> Mapping[int, Callable[..., Any]]: """Map :py:class:`COMPRESSION` value to compress function.""" return CompressionCodec(True) @cached_property def DECOMPRESSORS(self) -> Mapping[int, Callable[..., Any]]: """Map :py:class:`COMPRESSION` value to decompress function.""" return CompressionCodec(False) @cached_property def IMAGE_COMPRESSIONS(self) -> set[int]: # set of compression to encode/decode images # encode/decode preserves shape and dtype # cannot be used with predictors or fillorder return { 6, # jpeg 7, # jpeg 22610, # jpegxr 33003, # jpeg2k 33004, # jpeg2k 33005, # jpeg2k 33007, # alt_jpeg 34712, # jpeg2k 34892, # jpeg 34933, # png 34934, # jpegxr ZIF 48124, # jetraw 50001, # webp 50002, # jpegxl 52546, # jpegxl DNG } @cached_property def AXES_NAMES(self) -> dict[str, str]: """Map axes character codes to dimension names. - **X : width** (image width) - **Y : height** (image length) - **Z : depth** (image depth) - **S : sample** (color space and extra samples) - **I : sequence** (generic sequence of images, frames, planes, pages) - **T : time** (time series) - **C : channel** (acquisition path or emission wavelength) - **A : angle** (OME) - **P : phase** (OME. 
In LSM, **P** maps to **position**) - **R : tile** (OME. Region, position, or mosaic) - **H : lifetime** (OME. Histogram) - **E : lambda** (OME. Excitation wavelength) - **Q : other** (OME) - **L : exposure** (FluoView) - **V : event** (FluoView) - **M : mosaic** (LSM 6) - **J : column** (NDTiff) - **K : row** (NDTiff) There is no universal standard for dimension codes or names. This mapping mainly follows TIFF, OME-TIFF, ImageJ, LSM, and FluoView conventions. """ return { 'X': 'width', 'Y': 'height', 'Z': 'depth', 'S': 'sample', 'I': 'sequence', # 'F': 'file', 'T': 'time', 'C': 'channel', 'A': 'angle', 'P': 'phase', 'R': 'tile', 'H': 'lifetime', 'E': 'lambda', 'L': 'exposure', 'V': 'event', 'M': 'mosaic', 'Q': 'other', 'J': 'column', 'K': 'row', } @cached_property def AXES_CODES(self) -> dict[str, str]: """Map dimension names to axes character codes. Reverse mapping of :py:attr:`AXES_NAMES`. """ codes = {name: code for code, name in TIFF.AXES_NAMES.items()} codes['z'] = 'Z' # NDTiff codes['position'] = 'R' # NDTiff return codes @cached_property def GEO_KEYS(self) -> type[enum.IntEnum]: """:py:class:`geodb.GeoKeys`.""" try: from .geodb import GeoKeys except ImportError: class GeoKeys(enum.IntEnum): # type: ignore[no-redef] pass return GeoKeys @cached_property def GEO_CODES(self) -> dict[int, type[enum.IntEnum]]: """Map :py:class:`geodb.GeoKeys` to GeoTIFF codes.""" try: from .geodb import GEO_CODES except ImportError: GEO_CODES = {} return GEO_CODES @cached_property def PAGE_FLAGS(self) -> set[str]: # TiffFile and TiffPage 'is_\*' attributes exclude = { 'reduced', 'mask', 'final', 'memmappable', 'contiguous', 'tiled', 'subsampled', 'jfif', } return { a[3:] for a in dir(TiffPage) if a[:3] == 'is_' and a[3:] not in exclude } @cached_property def FILE_FLAGS(self) -> set[str]: # TiffFile 'is_\*' attributes exclude = {'bigtiff', 'appendable'} return { a[3:] for a in dir(TiffFile) if a[:3] == 'is_' and a[3:] not in exclude }.union(TIFF.PAGE_FLAGS) @property def FILE_PATTERNS(self) -> dict[str, str]: # predefined FileSequence patterns return { 'axes': r"""(?ix) # matches Olympus OIF and Leica TIFF series _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4})) _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))? _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))? _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))? _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))? _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))? _?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))? 
""" } @property def FILE_EXTENSIONS(self) -> tuple[str, ...]: """Known TIFF file extensions.""" return ( 'tif', 'tiff', 'ome.tif', 'lsm', 'stk', 'qpi', 'pcoraw', 'qptiff', 'ptiff', 'ptif', 'gel', 'seq', 'svs', 'avs', 'scn', 'zif', 'ndpi', 'bif', 'tf8', 'tf2', 'btf', 'eer', ) @property def FILEOPEN_FILTER(self) -> list[tuple[str, str]]: # string for use in Windows File Open box return [ (f'{ext.upper()} files', f'*.{ext}') for ext in TIFF.FILE_EXTENSIONS ] + [('All files', '*')] @property def CZ_LSMINFO(self) -> list[tuple[str, str]]: # numpy data type of LSMINFO structure return [ ('MagicNumber', 'u4'), ('StructureSize', 'i4'), ('DimensionX', 'i4'), ('DimensionY', 'i4'), ('DimensionZ', 'i4'), ('DimensionChannels', 'i4'), ('DimensionTime', 'i4'), ('DataType', 'i4'), # DATATYPES ('ThumbnailX', 'i4'), ('ThumbnailY', 'i4'), ('VoxelSizeX', 'f8'), ('VoxelSizeY', 'f8'), ('VoxelSizeZ', 'f8'), ('OriginX', 'f8'), ('OriginY', 'f8'), ('OriginZ', 'f8'), ('ScanType', 'u2'), ('SpectralScan', 'u2'), ('TypeOfData', 'u4'), # TYPEOFDATA ('OffsetVectorOverlay', 'u4'), ('OffsetInputLut', 'u4'), ('OffsetOutputLut', 'u4'), ('OffsetChannelColors', 'u4'), ('TimeIntervall', 'f8'), ('OffsetChannelDataTypes', 'u4'), ('OffsetScanInformation', 'u4'), # SCANINFO ('OffsetKsData', 'u4'), ('OffsetTimeStamps', 'u4'), ('OffsetEventList', 'u4'), ('OffsetRoi', 'u4'), ('OffsetBleachRoi', 'u4'), ('OffsetNextRecording', 'u4'), # LSM 2.0 ends here ('DisplayAspectX', 'f8'), ('DisplayAspectY', 'f8'), ('DisplayAspectZ', 'f8'), ('DisplayAspectTime', 'f8'), ('OffsetMeanOfRoisOverlay', 'u4'), ('OffsetTopoIsolineOverlay', 'u4'), ('OffsetTopoProfileOverlay', 'u4'), ('OffsetLinescanOverlay', 'u4'), ('ToolbarFlags', 'u4'), ('OffsetChannelWavelength', 'u4'), ('OffsetChannelFactors', 'u4'), ('ObjectiveSphereCorrection', 'f8'), ('OffsetUnmixParameters', 'u4'), # LSM 3.2, 4.0 end here ('OffsetAcquisitionParameters', 'u4'), ('OffsetCharacteristics', 'u4'), ('OffsetPalette', 'u4'), ('TimeDifferenceX', 'f8'), ('TimeDifferenceY', 'f8'), ('TimeDifferenceZ', 'f8'), ('InternalUse1', 'u4'), ('DimensionP', 'i4'), ('DimensionM', 'i4'), ('DimensionsReserved', '16i4'), ('OffsetTilePositions', 'u4'), ('', '9u4'), # Reserved ('OffsetPositions', 'u4'), # ('', '21u4'), # must be 0 ] @property def CZ_LSMINFO_READERS( self, ) -> dict[str, Callable[[FileHandle], Any] | None]: # import functions for CZ_LSMINFO sub-records # TODO: read more CZ_LSMINFO sub-records return { 'ScanInformation': read_lsm_scaninfo, 'TimeStamps': read_lsm_timestamps, 'EventList': read_lsm_eventlist, 'ChannelColors': read_lsm_channelcolors, 'Positions': read_lsm_positions, 'TilePositions': read_lsm_positions, 'VectorOverlay': None, 'InputLut': read_lsm_lookuptable, 'OutputLut': read_lsm_lookuptable, 'TimeIntervall': None, 'ChannelDataTypes': read_lsm_channeldatatypes, 'KsData': None, 'Roi': None, 'BleachRoi': None, 'NextRecording': None, # read with TiffFile(fh, offset=) 'MeanOfRoisOverlay': None, 'TopoIsolineOverlay': None, 'TopoProfileOverlay': None, 'ChannelWavelength': read_lsm_channelwavelength, 'SphereCorrection': None, 'ChannelFactors': None, 'UnmixParameters': None, 'AcquisitionParameters': None, 'Characteristics': None, } @property def CZ_LSMINFO_SCANTYPE(self) -> dict[int, str]: # map CZ_LSMINFO.ScanType to dimension order return { 0: 'ZCYX', # Stack, normal x-y-z-scan 1: 'CZX', # Z-Scan, x-z-plane 2: 'CTX', # Line or Time Series Line 3: 'TCYX', # Time Series Plane, x-y 4: 'TCZX', # Time Series z-Scan, x-z 5: 'CTX', # Time Series Mean-of-ROIs 6: 'TZCYX', # Time Series Stack, 
x-y-z 7: 'TZCYX', # TODO: Spline Scan 8: 'CZX', # Spline Plane, x-z 9: 'TCZX', # Time Series Spline Plane, x-z 10: 'CTX', # Point or Time Series Point } @property def CZ_LSMINFO_DIMENSIONS(self) -> dict[str, str]: # map dimension codes to CZ_LSMINFO attribute return { 'X': 'DimensionX', 'Y': 'DimensionY', 'Z': 'DimensionZ', 'C': 'DimensionChannels', 'T': 'DimensionTime', 'P': 'DimensionP', 'M': 'DimensionM', } @property def CZ_LSMINFO_DATATYPES(self) -> dict[int, str]: # description of CZ_LSMINFO.DataType return { 0: 'varying data types', 1: '8 bit unsigned integer', 2: '12 bit unsigned integer', 5: '32 bit float', } @property def CZ_LSMINFO_TYPEOFDATA(self) -> dict[int, str]: # description of CZ_LSMINFO.TypeOfData return { 0: 'Original scan data', 1: 'Calculated data', 2: '3D reconstruction', 3: 'Topography height map', } @property def CZ_LSMINFO_SCANINFO_ARRAYS(self) -> dict[int, str]: return { 0x20000000: 'Tracks', 0x30000000: 'Lasers', 0x60000000: 'DetectionChannels', 0x80000000: 'IlluminationChannels', 0xA0000000: 'BeamSplitters', 0xC0000000: 'DataChannels', 0x11000000: 'Timers', 0x13000000: 'Markers', } @property def CZ_LSMINFO_SCANINFO_STRUCTS(self) -> dict[int, str]: return { # 0x10000000: 'Recording', 0x40000000: 'Track', 0x50000000: 'Laser', 0x70000000: 'DetectionChannel', 0x90000000: 'IlluminationChannel', 0xB0000000: 'BeamSplitter', 0xD0000000: 'DataChannel', 0x12000000: 'Timer', 0x14000000: 'Marker', } @property def CZ_LSMINFO_SCANINFO_ATTRIBUTES(self) -> dict[int, str]: return { # Recording 0x10000001: 'Name', 0x10000002: 'Description', 0x10000003: 'Notes', 0x10000004: 'Objective', 0x10000005: 'ProcessingSummary', 0x10000006: 'SpecialScanMode', 0x10000007: 'ScanType', 0x10000008: 'ScanMode', 0x10000009: 'NumberOfStacks', 0x1000000A: 'LinesPerPlane', 0x1000000B: 'SamplesPerLine', 0x1000000C: 'PlanesPerVolume', 0x1000000D: 'ImagesWidth', 0x1000000E: 'ImagesHeight', 0x1000000F: 'ImagesNumberPlanes', 0x10000010: 'ImagesNumberStacks', 0x10000011: 'ImagesNumberChannels', 0x10000012: 'LinscanXySize', 0x10000013: 'ScanDirection', 0x10000014: 'TimeSeries', 0x10000015: 'OriginalScanData', 0x10000016: 'ZoomX', 0x10000017: 'ZoomY', 0x10000018: 'ZoomZ', 0x10000019: 'Sample0X', 0x1000001A: 'Sample0Y', 0x1000001B: 'Sample0Z', 0x1000001C: 'SampleSpacing', 0x1000001D: 'LineSpacing', 0x1000001E: 'PlaneSpacing', 0x1000001F: 'PlaneWidth', 0x10000020: 'PlaneHeight', 0x10000021: 'VolumeDepth', 0x10000023: 'Nutation', 0x10000034: 'Rotation', 0x10000035: 'Precession', 0x10000036: 'Sample0time', 0x10000037: 'StartScanTriggerIn', 0x10000038: 'StartScanTriggerOut', 0x10000039: 'StartScanEvent', 0x10000040: 'StartScanTime', 0x10000041: 'StopScanTriggerIn', 0x10000042: 'StopScanTriggerOut', 0x10000043: 'StopScanEvent', 0x10000044: 'StopScanTime', 0x10000045: 'UseRois', 0x10000046: 'UseReducedMemoryRois', 0x10000047: 'User', 0x10000048: 'UseBcCorrection', 0x10000049: 'PositionBcCorrection1', 0x10000050: 'PositionBcCorrection2', 0x10000051: 'InterpolationY', 0x10000052: 'CameraBinning', 0x10000053: 'CameraSupersampling', 0x10000054: 'CameraFrameWidth', 0x10000055: 'CameraFrameHeight', 0x10000056: 'CameraOffsetX', 0x10000057: 'CameraOffsetY', 0x10000059: 'RtBinning', 0x1000005A: 'RtFrameWidth', 0x1000005B: 'RtFrameHeight', 0x1000005C: 'RtRegionWidth', 0x1000005D: 'RtRegionHeight', 0x1000005E: 'RtOffsetX', 0x1000005F: 'RtOffsetY', 0x10000060: 'RtZoom', 0x10000061: 'RtLinePeriod', 0x10000062: 'Prescan', 0x10000063: 'ScanDirectionZ', # Track 0x40000001: 'MultiplexType', # 0 After Line; 1 After Frame 
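# Editorial note: the high-order hex digits of each key select the
# containing sub-block, matching CZ_LSMINFO_SCANINFO_STRUCTS above
# (0x1.......: Recording, 0x4.......: Track, 0x5.......: Laser,
# 0x7.......: DetectionChannel, ...); the remaining bits identify the
# attribute within that block.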
0x40000002: 'MultiplexOrder', 0x40000003: 'SamplingMode', # 0 Sample; 1 Line Avg; 2 Frame Avg 0x40000004: 'SamplingMethod', # 1 Mean; 2 Sum 0x40000005: 'SamplingNumber', 0x40000006: 'Acquire', 0x40000007: 'SampleObservationTime', 0x4000000B: 'TimeBetweenStacks', 0x4000000C: 'Name', 0x4000000D: 'Collimator1Name', 0x4000000E: 'Collimator1Position', 0x4000000F: 'Collimator2Name', 0x40000010: 'Collimator2Position', 0x40000011: 'IsBleachTrack', 0x40000012: 'IsBleachAfterScanNumber', 0x40000013: 'BleachScanNumber', 0x40000014: 'TriggerIn', 0x40000015: 'TriggerOut', 0x40000016: 'IsRatioTrack', 0x40000017: 'BleachCount', 0x40000018: 'SpiCenterWavelength', 0x40000019: 'PixelTime', 0x40000021: 'CondensorFrontlens', 0x40000023: 'FieldStopValue', 0x40000024: 'IdCondensorAperture', 0x40000025: 'CondensorAperture', 0x40000026: 'IdCondensorRevolver', 0x40000027: 'CondensorFilter', 0x40000028: 'IdTransmissionFilter1', 0x40000029: 'IdTransmission1', 0x40000030: 'IdTransmissionFilter2', 0x40000031: 'IdTransmission2', 0x40000032: 'RepeatBleach', 0x40000033: 'EnableSpotBleachPos', 0x40000034: 'SpotBleachPosx', 0x40000035: 'SpotBleachPosy', 0x40000036: 'SpotBleachPosz', 0x40000037: 'IdTubelens', 0x40000038: 'IdTubelensPosition', 0x40000039: 'TransmittedLight', 0x4000003A: 'ReflectedLight', 0x4000003B: 'SimultanGrabAndBleach', 0x4000003C: 'BleachPixelTime', # Laser 0x50000001: 'Name', 0x50000002: 'Acquire', 0x50000003: 'Power', # DetectionChannel 0x70000001: 'IntegrationMode', 0x70000002: 'SpecialMode', 0x70000003: 'DetectorGainFirst', 0x70000004: 'DetectorGainLast', 0x70000005: 'AmplifierGainFirst', 0x70000006: 'AmplifierGainLast', 0x70000007: 'AmplifierOffsFirst', 0x70000008: 'AmplifierOffsLast', 0x70000009: 'PinholeDiameter', 0x7000000A: 'CountingTrigger', 0x7000000B: 'Acquire', 0x7000000C: 'PointDetectorName', 0x7000000D: 'AmplifierName', 0x7000000E: 'PinholeName', 0x7000000F: 'FilterSetName', 0x70000010: 'FilterName', 0x70000013: 'IntegratorName', 0x70000014: 'ChannelName', 0x70000015: 'DetectorGainBc1', 0x70000016: 'DetectorGainBc2', 0x70000017: 'AmplifierGainBc1', 0x70000018: 'AmplifierGainBc2', 0x70000019: 'AmplifierOffsetBc1', 0x70000020: 'AmplifierOffsetBc2', 0x70000021: 'SpectralScanChannels', 0x70000022: 'SpiWavelengthStart', 0x70000023: 'SpiWavelengthStop', 0x70000026: 'DyeName', 0x70000027: 'DyeFolder', # IlluminationChannel 0x90000001: 'Name', 0x90000002: 'Power', 0x90000003: 'Wavelength', 0x90000004: 'Aquire', 0x90000005: 'DetchannelName', 0x90000006: 'PowerBc1', 0x90000007: 'PowerBc2', # BeamSplitter 0xB0000001: 'FilterSet', 0xB0000002: 'Filter', 0xB0000003: 'Name', # DataChannel 0xD0000001: 'Name', 0xD0000003: 'Acquire', 0xD0000004: 'Color', 0xD0000005: 'SampleType', 0xD0000006: 'BitsPerSample', 0xD0000007: 'RatioType', 0xD0000008: 'RatioTrack1', 0xD0000009: 'RatioTrack2', 0xD000000A: 'RatioChannel1', 0xD000000B: 'RatioChannel2', 0xD000000C: 'RatioConst1', 0xD000000D: 'RatioConst2', 0xD000000E: 'RatioConst3', 0xD000000F: 'RatioConst4', 0xD0000010: 'RatioConst5', 0xD0000011: 'RatioConst6', 0xD0000012: 'RatioFirstImages1', 0xD0000013: 'RatioFirstImages2', 0xD0000014: 'DyeName', 0xD0000015: 'DyeFolder', 0xD0000016: 'Spectrum', 0xD0000017: 'Acquire', # Timer 0x12000001: 'Name', 0x12000002: 'Description', 0x12000003: 'Interval', 0x12000004: 'TriggerIn', 0x12000005: 'TriggerOut', 0x12000006: 'ActivationTime', 0x12000007: 'ActivationNumber', # Marker 0x14000001: 'Name', 0x14000002: 'Description', 0x14000003: 'TriggerIn', 0x14000004: 'TriggerOut', } @cached_property def CZ_LSM_LUTTYPE(self): # TODO: 
type this class CZ_LSM_LUTTYPE(enum.IntEnum): NORMAL = 0 ORIGINAL = 1 RAMP = 2 POLYLINE = 3 SPLINE = 4 GAMMA = 5 return CZ_LSM_LUTTYPE @cached_property def CZ_LSM_SUBBLOCK_TYPE(self): # TODO: type this class CZ_LSM_SUBBLOCK_TYPE(enum.IntEnum): END = 0 GAMMA = 1 BRIGHTNESS = 2 CONTRAST = 3 RAMP = 4 KNOTS = 5 PALETTE_12_TO_12 = 6 return CZ_LSM_SUBBLOCK_TYPE @property def NIH_IMAGE_HEADER(self): # TODO: type this return [ ('FileID', 'S8'), ('nLines', 'i2'), ('PixelsPerLine', 'i2'), ('Version', 'i2'), ('OldLutMode', 'i2'), ('OldnColors', 'i2'), ('Colors', 'u1', (3, 32)), ('OldColorStart', 'i2'), ('ColorWidth', 'i2'), ('ExtraColors', 'u2', (6, 3)), ('nExtraColors', 'i2'), ('ForegroundIndex', 'i2'), ('BackgroundIndex', 'i2'), ('XScale', 'f8'), ('Unused2', 'i2'), ('Unused3', 'i2'), ('UnitsID', 'i2'), # NIH_UNITS_TYPE ('p1', [('x', 'i2'), ('y', 'i2')]), ('p2', [('x', 'i2'), ('y', 'i2')]), ('CurveFitType', 'i2'), # NIH_CURVEFIT_TYPE ('nCoefficients', 'i2'), ('Coeff', 'f8', 6), ('UMsize', 'u1'), ('UM', 'S15'), ('UnusedBoolean', 'u1'), ('BinaryPic', 'b1'), ('SliceStart', 'i2'), ('SliceEnd', 'i2'), ('ScaleMagnification', 'f4'), ('nSlices', 'i2'), ('SliceSpacing', 'f4'), ('CurrentSlice', 'i2'), ('FrameInterval', 'f4'), ('PixelAspectRatio', 'f4'), ('ColorStart', 'i2'), ('ColorEnd', 'i2'), ('nColors', 'i2'), ('Fill1', '3u2'), ('Fill2', '3u2'), ('Table', 'u1'), # NIH_COLORTABLE_TYPE ('LutMode', 'u1'), # NIH_LUTMODE_TYPE ('InvertedTable', 'b1'), ('ZeroClip', 'b1'), ('XUnitSize', 'u1'), ('XUnit', 'S11'), ('StackType', 'i2'), # NIH_STACKTYPE_TYPE # ('UnusedBytes', 'u1', 200) ] @property def NIH_COLORTABLE_TYPE(self) -> tuple[str, ...]: return ( 'CustomTable', 'AppleDefault', 'Pseudo20', 'Pseudo32', 'Rainbow', 'Fire1', 'Fire2', 'Ice', 'Grays', 'Spectrum', ) @property def NIH_LUTMODE_TYPE(self) -> tuple[str, ...]: return ( 'PseudoColor', 'OldAppleDefault', 'OldSpectrum', 'GrayScale', 'ColorLut', 'CustomGrayscale', ) @property def NIH_CURVEFIT_TYPE(self) -> tuple[str, ...]: return ( 'StraightLine', 'Poly2', 'Poly3', 'Poly4', 'Poly5', 'ExpoFit', 'PowerFit', 'LogFit', 'RodbardFit', 'SpareFit1', 'Uncalibrated', 'UncalibratedOD', ) @property def NIH_UNITS_TYPE(self) -> tuple[str, ...]: return ( 'Nanometers', 'Micrometers', 'Millimeters', 'Centimeters', 'Meters', 'Kilometers', 'Inches', 'Feet', 'Miles', 'Pixels', 'OtherUnits', ) @property def TVIPS_HEADER_V1(self) -> list[tuple[str, str]]: # TVIPS TemData structure from EMMENU Help file return [ ('Version', 'i4'), ('CommentV1', 'S80'), ('HighTension', 'i4'), ('SphericalAberration', 'i4'), ('IlluminationAperture', 'i4'), ('Magnification', 'i4'), ('PostMagnification', 'i4'), ('FocalLength', 'i4'), ('Defocus', 'i4'), ('Astigmatism', 'i4'), ('AstigmatismDirection', 'i4'), ('BiprismVoltage', 'i4'), ('SpecimenTiltAngle', 'i4'), ('SpecimenTiltDirection', 'i4'), ('IlluminationTiltDirection', 'i4'), ('IlluminationTiltAngle', 'i4'), ('ImageMode', 'i4'), ('EnergySpread', 'i4'), ('ChromaticAberration', 'i4'), ('ShutterType', 'i4'), ('DefocusSpread', 'i4'), ('CcdNumber', 'i4'), ('CcdSize', 'i4'), ('OffsetXV1', 'i4'), ('OffsetYV1', 'i4'), ('PhysicalPixelSize', 'i4'), ('Binning', 'i4'), ('ReadoutSpeed', 'i4'), ('GainV1', 'i4'), ('SensitivityV1', 'i4'), ('ExposureTimeV1', 'i4'), ('FlatCorrected', 'i4'), ('DeadPxCorrected', 'i4'), ('ImageMean', 'i4'), ('ImageStd', 'i4'), ('DisplacementX', 'i4'), ('DisplacementY', 'i4'), ('DateV1', 'i4'), ('TimeV1', 'i4'), ('ImageMin', 'i4'), ('ImageMax', 'i4'), ('ImageStatisticsQuality', 'i4'), ] @property def TVIPS_HEADER_V2(self) -> 
list[tuple[str, str]]: return [ ('ImageName', 'V160'), # utf16 ('ImageFolder', 'V160'), ('ImageSizeX', 'i4'), ('ImageSizeY', 'i4'), ('ImageSizeZ', 'i4'), ('ImageSizeE', 'i4'), ('ImageDataType', 'i4'), ('Date', 'i4'), ('Time', 'i4'), ('Comment', 'V1024'), ('ImageHistory', 'V1024'), ('Scaling', '16f4'), ('ImageStatistics', '16c16'), ('ImageType', 'i4'), ('ImageDisplaType', 'i4'), ('PixelSizeX', 'f4'), # distance between two px in x, [nm] ('PixelSizeY', 'f4'), # distance between two px in y, [nm] ('ImageDistanceZ', 'f4'), ('ImageDistanceE', 'f4'), ('ImageMisc', '32f4'), ('TemType', 'V160'), ('TemHighTension', 'f4'), ('TemAberrations', '32f4'), ('TemEnergy', '32f4'), ('TemMode', 'i4'), ('TemMagnification', 'f4'), ('TemMagnificationCorrection', 'f4'), ('PostMagnification', 'f4'), ('TemStageType', 'i4'), ('TemStagePosition', '5f4'), # x, y, z, a, b ('TemImageShift', '2f4'), ('TemBeamShift', '2f4'), ('TemBeamTilt', '2f4'), ('TilingParameters', '7f4'), # 0: tiling? 1:x 2:y 3: max x # 4: max y 5: overlap x 6: overlap y ('TemIllumination', '3f4'), # 0: spotsize 1: intensity ('TemShutter', 'i4'), ('TemMisc', '32f4'), ('CameraType', 'V160'), ('PhysicalPixelSizeX', 'f4'), ('PhysicalPixelSizeY', 'f4'), ('OffsetX', 'i4'), ('OffsetY', 'i4'), ('BinningX', 'i4'), ('BinningY', 'i4'), ('ExposureTime', 'f4'), ('Gain', 'f4'), ('ReadoutRate', 'f4'), ('FlatfieldDescription', 'V160'), ('Sensitivity', 'f4'), ('Dose', 'f4'), ('CamMisc', '32f4'), ('FeiMicroscopeInformation', 'V1024'), ('FeiSpecimenInformation', 'V1024'), ('Magic', 'u4'), ] @property def MM_HEADER(self) -> list[tuple[Any, ...]]: # Olympus FluoView MM_Header MM_DIMENSION = [ ('Name', 'S16'), ('Size', 'i4'), ('Origin', 'f8'), ('Resolution', 'f8'), ('Unit', 'S64'), ] return [ ('HeaderFlag', 'i2'), ('ImageType', 'u1'), ('ImageName', 'S257'), ('OffsetData', 'u4'), ('PaletteSize', 'i4'), ('OffsetPalette0', 'u4'), ('OffsetPalette1', 'u4'), ('CommentSize', 'i4'), ('OffsetComment', 'u4'), ('Dimensions', MM_DIMENSION, 10), ('OffsetPosition', 'u4'), ('MapType', 'i2'), ('MapMin', 'f8'), ('MapMax', 'f8'), ('MinValue', 'f8'), ('MaxValue', 'f8'), ('OffsetMap', 'u4'), ('Gamma', 'f8'), ('Offset', 'f8'), ('GrayChannel', MM_DIMENSION), ('OffsetThumbnail', 'u4'), ('VoiceField', 'i4'), ('OffsetVoiceField', 'u4'), ] @property def MM_DIMENSIONS(self) -> dict[str, str]: # map FluoView MM_Header.Dimensions to axes characters return { 'X': 'X', 'Y': 'Y', 'Z': 'Z', 'T': 'T', 'CH': 'C', 'WAVELENGTH': 'C', 'TIME': 'T', 'XY': 'R', 'EVENT': 'V', 'EXPOSURE': 'L', } @property def UIC_TAGS(self) -> list[tuple[str, Any]]: # map Universal Imaging Corporation MetaMorph internal tag ids to # name and type from fractions import Fraction return [ ('AutoScale', int), ('MinScale', int), ('MaxScale', int), ('SpatialCalibration', int), ('XCalibration', Fraction), ('YCalibration', Fraction), ('CalibrationUnits', str), ('Name', str), ('ThreshState', int), ('ThreshStateRed', int), ('tagid_10', None), # undefined ('ThreshStateGreen', int), ('ThreshStateBlue', int), ('ThreshStateLo', int), ('ThreshStateHi', int), ('Zoom', int), ('CreateTime', julian_datetime), ('LastSavedTime', julian_datetime), ('currentBuffer', int), ('grayFit', None), ('grayPointCount', None), ('grayX', Fraction), ('grayY', Fraction), ('grayMin', Fraction), ('grayMax', Fraction), ('grayUnitName', str), ('StandardLUT', int), ('wavelength', int), ('StagePosition', '(%i,2,2)u4'), # N xy positions as fract ('CameraChipOffset', '(%i,2,2)u4'), # N xy offsets as fract ('OverlayMask', None), ('OverlayCompress', None), ('Overlay', None), 
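# Editorial note: the index of each entry in this list is the UIC tag id
# looked up by read_uic_tag via TIFF.UIC_TAGS[tagid]; entries with a None
# type are skipped there (read as int and stored under a '_'-prefixed name).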
('SpecialOverlayMask', None), ('SpecialOverlayCompress', None), ('SpecialOverlay', None), ('ImageProperty', read_uic_property), ('StageLabel', '%ip'), # N str ('AutoScaleLoInfo', Fraction), ('AutoScaleHiInfo', Fraction), ('AbsoluteZ', '(%i,2)u4'), # N fractions ('AbsoluteZValid', '(%i,)u4'), # N long ('Gamma', 'I'), # 'I' uses offset ('GammaRed', 'I'), ('GammaGreen', 'I'), ('GammaBlue', 'I'), ('CameraBin', '2I'), ('NewLUT', int), ('ImagePropertyEx', None), ('PlaneProperty', int), ('UserLutTable', '(256,3)u1'), ('RedAutoScaleInfo', int), ('RedAutoScaleLoInfo', Fraction), ('RedAutoScaleHiInfo', Fraction), ('RedMinScaleInfo', int), ('RedMaxScaleInfo', int), ('GreenAutoScaleInfo', int), ('GreenAutoScaleLoInfo', Fraction), ('GreenAutoScaleHiInfo', Fraction), ('GreenMinScaleInfo', int), ('GreenMaxScaleInfo', int), ('BlueAutoScaleInfo', int), ('BlueAutoScaleLoInfo', Fraction), ('BlueAutoScaleHiInfo', Fraction), ('BlueMinScaleInfo', int), ('BlueMaxScaleInfo', int), # ('OverlayPlaneColor', read_uic_overlay_plane_color), ] @property def PILATUS_HEADER(self) -> dict[str, Any]: # PILATUS CBF Header Specification, Version 1.4 # map key to [value_indices], type return { 'Detector': ([slice(1, None)], str), 'Pixel_size': ([1, 4], float), 'Silicon': ([3], float), 'Exposure_time': ([1], float), 'Exposure_period': ([1], float), 'Tau': ([1], float), 'Count_cutoff': ([1], int), 'Threshold_setting': ([1], float), 'Gain_setting': ([1, 2], str), 'N_excluded_pixels': ([1], int), 'Excluded_pixels': ([1], str), 'Flat_field': ([1], str), 'Trim_file': ([1], str), 'Image_path': ([1], str), # optional 'Wavelength': ([1], float), 'Energy_range': ([1, 2], float), 'Detector_distance': ([1], float), 'Detector_Voffset': ([1], float), 'Beam_xy': ([1, 2], float), 'Flux': ([1], str), 'Filter_transmission': ([1], float), 'Start_angle': ([1], float), 'Angle_increment': ([1], float), 'Detector_2theta': ([1], float), 'Polarization': ([1], float), 'Alpha': ([1], float), 'Kappa': ([1], float), 'Phi': ([1], float), 'Phi_increment': ([1], float), 'Chi': ([1], float), 'Chi_increment': ([1], float), 'Oscillation_axis': ([slice(1, None)], str), 'N_oscillations': ([1], int), 'Start_position': ([1], float), 'Position_increment': ([1], float), 'Shutter_time': ([1], float), 'Omega': ([1], float), 'Omega_increment': ([1], float), } @cached_property def ALLOCATIONGRANULARITY(self) -> int: # alignment for writing contiguous data to TIFF import mmap return mmap.ALLOCATIONGRANULARITY @cached_property def MAXWORKERS(self) -> int: """Default maximum number of threads for de/compressing segments. The value of the ``TIFFFILE_NUM_THREADS`` environment variable if set, else half the CPU cores up to 32. """ if 'TIFFFILE_NUM_THREADS' in os.environ: return max(1, int(os.environ['TIFFFILE_NUM_THREADS'])) cpu_count: int | None try: cpu_count = len( os.sched_getaffinity(0) # type: ignore[attr-defined] ) except AttributeError: cpu_count = os.cpu_count() if cpu_count is None: return 1 return min(32, max(1, cpu_count // 2)) @cached_property def MAXIOWORKERS(self) -> int: """Default maximum number of I/O threads for reading file sequences. The value of the ``TIFFFILE_NUM_IOTHREADS`` environment variable if set, else 4 more than the number of CPU cores up to 32. 
""" if 'TIFFFILE_NUM_IOTHREADS' in os.environ: return max(1, int(os.environ['TIFFFILE_NUM_IOTHREADS'])) cpu_count: int | None try: cpu_count = len( os.sched_getaffinity(0) # type: ignore[attr-defined] ) except AttributeError: cpu_count = os.cpu_count() if cpu_count is None: return 5 return min(32, cpu_count + 4) BUFFERSIZE: int = 268435456 """Default number of bytes to read or encode in one pass (256 MB).""" TIFF = _TIFF() def read_tags( fh: FileHandle, /, byteorder: ByteOrder, offsetsize: int, tagnames: TiffTagRegistry, *, maxifds: int | None = None, customtags: ( dict[int, Callable[[FileHandle, ByteOrder, int, int, int], Any]] | None ) = None, ) -> list[dict[str, Any]]: """Read tag values from chain of IFDs. Parameters: fh: Binary file handle to read from. The file handle position must be at a valid IFD header. byteorder: Byte order of TIFF file. offsetsize: Size of offsets in TIFF file (8 for BigTIFF, else 4). tagnames: Map of tag codes to names. For example, :py:class:`_TIFF.GPS_TAGS` or :py:class:`_TIFF.IOP_TAGS`. maxifds: Maximum number of IFDs to read. By default, read the whole IFD chain. customtags: Mapping of tag codes to functions reading special tag value from file. Raises: TiffFileError: Invalid TIFF structure. Notes: This implementation does not support 64-bit NDPI files. """ code: int dtype: int count: int valuebytes: bytes valueoffset: int if offsetsize == 4: offsetformat = byteorder + 'I' tagnosize = 2 tagnoformat = byteorder + 'H' tagsize = 12 tagformat1 = byteorder + 'HH' tagformat2 = byteorder + 'I4s' elif offsetsize == 8: offsetformat = byteorder + 'Q' tagnosize = 8 tagnoformat = byteorder + 'Q' tagsize = 20 tagformat1 = byteorder + 'HH' tagformat2 = byteorder + 'Q8s' else: raise ValueError('invalid offset size') if customtags is None: customtags = {} if maxifds is None: maxifds = 2**32 result: list[dict[str, Any]] = [] unpack = struct.unpack offset = fh.tell() while len(result) < maxifds: # loop over IFDs try: tagno = unpack(tagnoformat, fh.read(tagnosize))[0] if tagno > 4096: raise TiffFileError(f'suspicious number of tags {tagno}') except Exception as exc: logger().error( f' corrupted tag list @{offset} ' f'raised {exc!r:.128}' ) break tags = {} data = fh.read(tagsize * tagno) pos = fh.tell() index = 0 for _ in range(tagno): code, dtype = unpack(tagformat1, data[index : index + 4]) count, valuebytes = unpack( tagformat2, data[index + 4 : index + tagsize] ) index += tagsize name = tagnames.get(code, str(code)) try: valueformat = TIFF.DATA_FORMATS[dtype] except KeyError: logger().error(f'invalid data type {dtype!r} for tag #{code}') continue valuesize = count * struct.calcsize(valueformat) if valuesize > offsetsize or code in customtags: valueoffset = unpack(offsetformat, valuebytes)[0] if valueoffset < 8 or valueoffset + valuesize > fh.size: logger().error( f'invalid value offset {valueoffset} for tag #{code}' ) continue fh.seek(valueoffset) if code in customtags: readfunc = customtags[code] value = readfunc(fh, byteorder, dtype, count, offsetsize) elif dtype in {1, 2, 7}: # BYTES, ASCII, UNDEFINED value = fh.read(valuesize) if len(value) != valuesize: logger().warning( ' ' f'could not read all values for tag #{code}' ) elif code in tagnames: fmt = ( f'{byteorder}' f'{count * int(valueformat[0])}' f'{valueformat[1]}' ) value = unpack(fmt, fh.read(valuesize)) else: value = read_numpy(fh, byteorder, dtype, count, offsetsize) elif dtype in {1, 2, 7}: # BYTES, ASCII, UNDEFINED value = valuebytes[:valuesize] else: fmt = ( f'{byteorder}' f'{count * int(valueformat[0])}' 
f'{valueformat[1]}' ) value = unpack(fmt, valuebytes[:valuesize]) process = ( code not in customtags and code not in TIFF.TAG_TUPLE and dtype != 7 # UNDEFINED ) if process and dtype == 2: # TIFF ASCII fields can contain multiple strings, # each terminated with a NUL value = value.rstrip(b'\x00') try: value = value.decode('utf-8').strip() except UnicodeDecodeError: try: value = value.decode('cp1252').strip() except UnicodeDecodeError as exc: logger().warning( '<read_tags> coercing invalid ASCII to ' f'bytes for tag #{code}, due to {exc!r:.128}' ) else: if code in TIFF.TAG_ENUM: t = TIFF.TAG_ENUM[code] try: value = tuple(t(v) for v in value) except ValueError as exc: if code not in {259, 317}: # ignore compression/predictor logger().warning( f'<read_tags> tag #{code} ' f'raised {exc!r:.128}' ) if process and len(value) == 1: value = value[0] tags[name] = value result.append(tags) # read offset to next page fh.seek(pos) offset = unpack(offsetformat, fh.read(offsetsize))[0] if offset == 0: break if offset >= fh.size: logger().error(f'<read_tags> invalid next page {offset=}') break fh.seek(offset) return result def read_exif_ifd( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read EXIF tags from file.""" exif = read_tags(fh, byteorder, offsetsize, TIFF.EXIF_TAGS, maxifds=1)[0] for name in ('ExifVersion', 'FlashpixVersion'): try: exif[name] = bytes2str(exif[name]) except Exception: pass if 'UserComment' in exif: idcode = exif['UserComment'][:8] try: if idcode == b'ASCII\x00\x00\x00': exif['UserComment'] = bytes2str(exif['UserComment'][8:]) elif idcode == b'UNICODE\x00': exif['UserComment'] = exif['UserComment'][8:].decode('utf-16') except Exception: pass return exif def read_gps_ifd( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read GPS tags from file.""" return read_tags(fh, byteorder, offsetsize, TIFF.GPS_TAGS, maxifds=1)[0] def read_interoperability_ifd( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read Interoperability tags from file.""" return read_tags(fh, byteorder, offsetsize, TIFF.IOP_TAGS, maxifds=1)[0] def read_bytes( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> bytes: """Read tag data from file.""" count *= numpy.dtype( 'B' if dtype == 2 else byteorder + TIFF.DATA_FORMATS[dtype][-1] ).itemsize data = fh.read(count) if len(data) != count: logger().warning( '<read_bytes> ' f'failed to read {count} bytes, got {len(data)}' ) return data def read_utf8( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> str: """Read unicode tag value from file.""" return fh.read(count).decode() def read_numpy( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> NDArray[Any]: """Read NumPy array tag value from file.""" return fh.read_array( 'b' if dtype == 2 else byteorder + TIFF.DATA_FORMATS[dtype][-1], count ) def read_colormap( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> NDArray[Any]: """Read ColorMap or TransferFunction tag value from file.""" cmap = fh.read_array(byteorder + TIFF.DATA_FORMATS[dtype][-1], count) if count % 3 == 0: cmap.shape = (3, -1) return cmap def read_json( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> Any: """Read JSON tag value from file.""" data = fh.read(count) try: return json.loads(bytes2str(data, 'utf-8')) except ValueError as exc:
logger().warning(f'<read_json> raised {exc!r:.128}') return None def read_mm_header( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read FluoView mm_header tag value from file.""" meta = recarray2dict( fh.read_record(numpy.dtype(TIFF.MM_HEADER), byteorder=byteorder) ) meta['Dimensions'] = [ (bytes2str(d[0]).strip(), d[1], d[2], d[3], bytes2str(d[4]).strip()) for d in meta['Dimensions'] ] d = meta['GrayChannel'] meta['GrayChannel'] = ( bytes2str(d[0]).strip(), d[1], d[2], d[3], bytes2str(d[4]).strip(), ) return meta def read_mm_stamp( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> NDArray[Any]: """Read FluoView mm_stamp tag value from file.""" return fh.read_array(byteorder + 'f8', 8) def read_uic1tag( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, planecount: int = 0, ) -> dict[str, Any]: """Read MetaMorph STK UIC1Tag value from file. Return empty dictionary if planecount is unknown. """ if dtype not in {4, 5} or byteorder != '<': raise ValueError(f'invalid UIC1Tag {byteorder}{dtype}') result = {} if dtype == 5: # pre MetaMorph 2.5 (not tested) values = fh.read_array('<u4', 2 * count).reshape(count, 2) result = {'ZDistance': values[:, 0] / values[:, 1]} elif planecount: for _ in range(count): tagid = struct.unpack('<I', fh.read(4))[0] if planecount > 1 and tagid in {28, 29, 37, 40, 41}: # silently skip unexpected tags fh.read(4) continue name, value = read_uic_tag(fh, tagid, planecount, True) if name == 'PlaneProperty': pos = fh.tell() fh.seek(value + 4) result.setdefault(name, []).append(read_uic_property(fh)) fh.seek(pos) else: result[name] = value return result def read_uic2tag( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, NDArray[Any]]: """Read MetaMorph STK UIC2Tag value from file.""" if dtype != 5 or byteorder != '<': raise ValueError('invalid UIC2Tag') values = fh.read_array('<u4', 6 * count).reshape(count, 6) return { 'ZDistance': values[:, 0] / values[:, 1], 'DateCreated': values[:, 2], # julian days 'TimeCreated': values[:, 3], # milliseconds 'DateModified': values[:, 4], # julian days 'TimeModified': values[:, 5], # milliseconds } def read_uic3tag( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, NDArray[Any]]: """Read MetaMorph STK UIC3Tag value from file.""" if dtype != 5 or byteorder != '<': raise ValueError('invalid UIC3Tag') values = fh.read_array('<u4', 2 * count).reshape(count, 2) return {'Wavelengths': values[:, 0] / values[:, 1]} def read_uic4tag( fh: FileHandle, byteorder: ByteOrder, dtype: int, planecount: int, offsetsize: int, /, ) -> dict[str, NDArray[Any]]: """Read MetaMorph STK UIC4Tag value from file.""" if dtype != 4 or byteorder != '<': raise ValueError('invalid UIC4Tag') result = {} while True: tagid: int = struct.unpack('<H', fh.read(2))[0] if tagid == 0: break name, value = read_uic_tag(fh, tagid, planecount, False) result[name] = value return result def read_uic_tag( fh: FileHandle, tagid: int, planecount: int, offset: bool, /, ) -> tuple[str, Any]: """Read single UIC tag value from file and return tag name and value. UIC1Tags use an offset.
""" def read_int() -> int: return int(struct.unpack(' tuple[int, int]: value = struct.unpack('<2I', fh.read(8)) return int(value[0]), (value[1]) try: name, dtype = TIFF.UIC_TAGS[tagid] except IndexError: # unknown tag return f'_TagId{tagid}', read_int() Fraction = TIFF.UIC_TAGS[4][1] if offset: pos = fh.tell() if dtype not in {int, None}: off = read_int() if off < 8: # undocumented cases, or invalid offset if dtype is str: return name, '' if tagid == 41: # AbsoluteZValid return name, off logger().warning( ' ' f'invalid offset for tag {name!r} @{off}' ) return name, off fh.seek(off) value: Any if dtype is None: # skip name = '_' + name value = read_int() elif dtype is int: # int value = read_int() elif dtype is Fraction: # fraction value = read_int2() value = value[0] / value[1] elif dtype is julian_datetime: # datetime value = read_int2() try: value = julian_datetime(*value) except Exception as exc: value = None logger().warning( f' reading {name} raised {exc!r:.128}' ) elif dtype is read_uic_property: # ImagePropertyEx value = read_uic_property(fh) elif dtype is str: # pascal string size = read_int() if 0 <= size < 2**10: value = struct.unpack(f'{size}s', fh.read(size))[0][:-1] value = bytes2str(value) elif offset: value = '' logger().warning( f' invalid string in tag {name!r}' ) else: raise ValueError(f'invalid string size {size}') elif planecount == 0: value = None elif dtype == '%ip': # sequence of pascal strings value = [] for _ in range(planecount): size = read_int() if 0 <= size < 2**10: string = struct.unpack(f'{size}s', fh.read(size))[0][:-1] value.append(bytes2str(string)) elif offset: logger().warning( f' invalid string in tag {name!r}' ) else: raise ValueError(f'invalid string size: {size}') else: # struct or numpy type dtype = '<' + dtype if '%i' in dtype: dtype = dtype % planecount if '(' in dtype: # numpy type value = fh.read_array(dtype, 1)[0] if value.shape[-1] == 2: # assume fractions value = value[..., 0] / value[..., 1] else: # struct format value = struct.unpack(dtype, fh.read(struct.calcsize(dtype))) if len(value) == 1: value = value[0] if offset: fh.seek(pos + 4) return name, value def read_uic_property(fh: FileHandle, /) -> dict[str, Any]: """Read UIC ImagePropertyEx or PlaneProperty tag from file.""" size = struct.unpack('B', fh.read(1))[0] name = bytes2str(struct.unpack(f'{size}s', fh.read(size))[0]) flags, prop = struct.unpack(' dict[str, Any]: """Read CZ_LSMINFO tag value from file.""" if byteorder != '<': raise ValueError('invalid CZ_LSMINFO structure') magic_number, structure_size = struct.unpack(' structure_size: break lsminfo.append((name, typestr)) else: lsminfo = CZ_LSMINFO result = recarray2dict( fh.read_record(numpy.dtype(lsminfo), byteorder=byteorder) ) # read LSM info subrecords at offsets for name, reader in TIFF.CZ_LSMINFO_READERS.items(): if reader is None: continue offset = result.get('Offset' + name, 0) if offset < 8: continue fh.seek(offset) try: result[name] = reader(fh) except ValueError: pass return result def read_lsm_channeldatatypes(fh: FileHandle, /) -> NDArray[Any]: """Read LSM channel data type from file.""" size = struct.unpack(' NDArray[Any]: """Read LSM channel wavelength ranges from file.""" size = struct.unpack(' NDArray[Any]: """Read LSM positions from file.""" size = struct.unpack(' NDArray[Any]: """Read LSM time stamps from file.""" size, count = struct.unpack(' invalid LSM TimeStamps block' ) return numpy.empty((0,), ' list[tuple[float, int, str]]: """Read LSM events from file and return as list of (time, type, text).""" count = 
struct.unpack(' 0: esize, etime, etype = struct.unpack(' dict[str, Any]: """Read LSM ChannelColors structure from file.""" result = {'Mono': False, 'Colors': [], 'ColorNames': []} pos = fh.tell() (size, ncolors, nnames, coffset, noffset, mono) = struct.unpack( ' ' 'invalid LSM ChannelColors structure' ) return result result['Mono'] = bool(mono) # Colors fh.seek(pos + coffset) colors = fh.read_array(numpy.uint8, count=ncolors * 4) colors = colors.reshape((ncolors, 4)) result['Colors'] = colors.tolist() # ColorNames fh.seek(pos + noffset) buffer = fh.read(size - noffset) names = [] while len(buffer) > 4: size = struct.unpack(' dict[str, Any]: """Read LSM lookup tables from file.""" result: dict[str, Any] = {} ( size, nsubblocks, nchannels, luttype, advanced, currentchannel, ) = struct.unpack(' ' 'invalid LSM LookupTables structure' ) return result fh.read(9 * 4) # reserved result['LutType'] = TIFF.CZ_LSM_LUTTYPE(luttype) result['Advanced'] = advanced result['NumberChannels'] = nchannels result['CurrentChannel'] = currentchannel result['SubBlocks'] = subblocks = [] for _ in range(nsubblocks): sbtype = struct.unpack(' ' f'invalid LSM SubBlock type {sbtype}' ) break subblocks.append( {'Type': TIFF.CZ_LSM_SUBBLOCK_TYPE(sbtype), 'Data': data} ) return result def read_lsm_scaninfo(fh: FileHandle, /) -> dict[str, Any]: """Read LSM ScanInfo structure from file.""" value: Any block: dict[str, Any] = {} blocks = [block] unpack = struct.unpack if struct.unpack(' invalid LSM ScanInfo structure' ) return block fh.read(8) while True: entry, dtype, size = unpack(' dict[str, Any]: """Read OlympusSIS structure from file. No specification is available. Only few fields are known. """ result: dict[str, Any] = {} (magic, minute, hour, day, month, year, name, tagcount) = struct.unpack( '<4s6xhhhhh6x32sh', fh.read(60) ) if magic != b'SIS0': raise ValueError('invalid OlympusSIS structure') result['name'] = bytes2str(name) try: result['datetime'] = DateTime( 1900 + year, month + 1, day, hour, minute ) except ValueError: pass data = fh.read(8 * tagcount) for i in range(0, tagcount * 8, 8): tagtype, count, offset = struct.unpack(' dict[str, Any]: """Read OlympusSIS INI string from file.""" try: return olympusini_metadata(bytes2str(fh.read(count))) except Exception as exc: logger().warning(f' raised {exc!r:.128}') return {} def read_tvips_header( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read TVIPS EM-MENU headers from file.""" result: dict[str, Any] = {} header_v1 = TIFF.TVIPS_HEADER_V1 header = fh.read_record(numpy.dtype(header_v1), byteorder=byteorder) for name, typestr in header_v1: result[name] = header[name].tolist() if header['Version'] == 2: header_v2 = TIFF.TVIPS_HEADER_V2 header = fh.read_record(numpy.dtype(header_v2), byteorder=byteorder) if header['Magic'] != 0xAAAAAAAA: logger().warning( ' invalid TVIPS v2 magic number' ) return {} # decode utf16 strings for name, typestr in header_v2: if typestr.startswith('V'): result[name] = bytes2str( header[name].tobytes(), 'utf-16', 'ignore' ) else: result[name] = header[name].tolist() # convert nm to m for axis in 'XY': header['PhysicalPixelSize' + axis] /= 1e9 header['PixelSize' + axis] /= 1e9 elif header.version != 1: logger().warning( ' unknown TVIPS header version' ) return {} return result def read_fei_metadata( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read FEI SFEG/HELIOS headers from file.""" result: dict[str, Any] = {} section: 
dict[str, Any] = {} data = bytes2str(fh.read(count)) for line in data.splitlines(): line = line.strip() if line.startswith('['): section = {} result[line[1:-1]] = section continue try: key, value = line.split('=') except ValueError: continue section[key] = astype(value) return result def read_cz_sem( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read Zeiss SEM tag from file. See https://sourceforge.net/p/gwyddion/mailman/message/29275000/ for unnamed values. """ result: dict[str, Any] = {'': ()} value: Any key = None data = bytes2str(fh.read(count)) for line in data.splitlines(): if line.isupper(): key = line.lower() elif key: try: name, value = line.split('=') except ValueError: try: name, value = line.split(':', 1) except Exception: continue value = value.strip() unit = '' try: v, u = value.split() number = astype(v, (int, float)) if number != v: value = number unit = u except Exception: number = astype(value, (int, float)) if number != value: value = number if value in {'No', 'Off'}: value = False elif value in {'Yes', 'On'}: value = True result[key] = (name.strip(), value) if unit: result[key] += (unit,) key = None else: result[''] += (astype(line, (int, float)),) return result def read_nih_image_header( fh: FileHandle, byteorder: ByteOrder, dtype: int, count: int, offsetsize: int, /, ) -> dict[str, Any]: """Read NIH_IMAGE_HEADER tag value from file.""" arr = fh.read_record(TIFF.NIH_IMAGE_HEADER, byteorder=byteorder) arr = arr.view(arr.dtype.newbyteorder(byteorder)) result = recarray2dict(arr) result['XUnit'] = result['XUnit'][: result['XUnitSize']] result['UM'] = result['UM'][: result['UMsize']] return result def read_scanimage_metadata( fh: FileHandle, / ) -> tuple[dict[str, Any], dict[str, Any], int]: """Read ScanImage BigTIFF v3 or v4 static and ROI metadata from file. The settings can be used to read image and metadata without parsing the TIFF file. Frame data and ROI groups can alternatively be obtained from the Software and Artist tags of any TIFF page. Parameters: fh: Binary file handle to read from. Returns: - Non-varying frame data, parsed with :py:func:`matlabstr2py`. - ROI group data, parsed from JSON. - Version of metadata (3 or 4). Raises: ValueError: File does not contain valid ScanImage metadata. """ fh.seek(0) try: byteorder, version = struct.unpack('<2sH', fh.read(4)) if byteorder != b'II' or version != 43: raise ValueError('not a BigTIFF file') fh.seek(16) magic, version, size0, size1 = struct.unpack(' 1 else {} return frame_data, roi_data, version def read_micromanager_metadata( fh: FileHandle | IO[bytes], /, keys: Container[str] | None = None ) -> dict[str, Any]: """Return Micro-Manager non-TIFF settings from file. The settings can be used to read image data without parsing any TIFF structures. Parameters: fh: Open file handle to Micro-Manager TIFF file. keys: Name of keys to return in result. Returns: Micro-Manager non-TIFF settings, which may contain the following keys: - 'MajorVersion' (str) - 'MinorVersion' (str) - 'Summary' (dict): Specifies the dataset, such as shape, dimensions, and coordinates. - 'IndexMap' (numpy.ndarray): (channel, slice, frame, position, ifd_offset) indices of all frames. - 'DisplaySettings' (list[dict]): Image display settings such as channel contrast and colors. - 'Comments' (dict): User comments. Notes: Summary metadata are the same for all files in a dataset. DisplaySettings metadata are frequently corrupted, and Comments are often empty. 
The Summary and IndexMap metadata are stored at the beginning of the file, while DisplaySettings and Comments are towards the end. Excluding DisplaySettings and Comments from the results may significantly speed up reading metadata of interest. References: - https://micro-manager.org/Micro-Manager_File_Formats - https://github.com/micro-manager/NDTiffStorage """ if keys is None: keys = {'Summary', 'IndexMap', 'DisplaySettings', 'Comments'} fh.seek(0) try: byteorder = {b'II': '<', b'MM': '>'}[fh.read(2)] fh.seek(8) ( index_header, index_offset, ) = struct.unpack(byteorder + 'II', fh.read(8)) except Exception as exc: raise ValueError('not a Micro-Manager TIFF file') from exc result = {} if index_header == 483729: # NDTiff >= v2 result['MajorVersion'] = index_offset try: ( summary_header, summary_length, ) = struct.unpack(byteorder + 'II', fh.read(8)) if summary_header != 2355492: # NDTiff v3 result['MinorVersion'] = summary_header if summary_length != 2355492: raise ValueError( f'invalid summary_length {summary_length}' ) summary_length = struct.unpack(byteorder + 'I', fh.read(4))[0] if 'Summary' in keys: data = fh.read(summary_length) if len(data) != summary_length: raise ValueError('not enough data') result['Summary'] = json.loads(bytes2str(data, 'utf-8')) except Exception as exc: logger().warning( ' ' f'failed to read NDTiffv{index_offset} summary settings, ' f'raised {exc!r:.128}' ) return result # Micro-Manager multipage TIFF or NDTiff v1 try: ( display_header, display_offset, comments_header, comments_offset, summary_header, summary_length, ) = struct.unpack(byteorder + 'IIIIII', fh.read(24)) except Exception as exc: logger().warning( ' failed to read header, ' f'raised {exc!r:.128}' ) if 'Summary' in keys: try: if summary_header != 2355492: raise ValueError(f'invalid summary_header {summary_header}') data = fh.read(summary_length) if len(data) != summary_length: raise ValueError('not enough data') result['Summary'] = json.loads(bytes2str(data, 'utf-8')) except Exception as exc: logger().warning( ' ' f'failed to read summary settings, raised {exc!r:.128}' ) if 'IndexMap' in keys: try: if index_header != 54773648: raise ValueError(f'invalid index_header {index_header}') fh.seek(index_offset) header, count = struct.unpack(byteorder + 'II', fh.read(8)) if header != 3453623: raise ValueError('invalid header') data = fh.read(count * 20) result['IndexMap'] = numpy.frombuffer( data, byteorder + 'u4', count * 5 ).reshape(-1, 5) except Exception as exc: logger().warning( ' ' f'failed to read index map, raised {exc!r:.128}' ) if 'DisplaySettings' in keys: try: if display_header != 483765892: raise ValueError(f'invalid display_header {display_header}') fh.seek(display_offset) header, count = struct.unpack(byteorder + 'II', fh.read(8)) if header != 347834724: # display_offset might be wrapped at 4 GB fh.seek(display_offset + 2**32) header, count = struct.unpack(byteorder + 'II', fh.read(8)) if header != 347834724: raise ValueError('invalid display header') data = fh.read(count) if len(data) != count: raise ValueError('not enough data') result['DisplaySettings'] = json.loads(bytes2str(data, 'utf-8')) except json.decoder.JSONDecodeError: pass # ignore frequent truncated JSON data except Exception as exc: logger().warning( ' ' f'failed to read display settings, raised {exc!r:.128}' ) result['MajorVersion'] = 0 try: if comments_header == 99384722: # Micro-Manager multipage TIFF if 'Comments' in keys: fh.seek(comments_offset) header, count = struct.unpack(byteorder + 'II', fh.read(8)) if header != 
84720485: # comments_offset might be wrapped at 4 GB fh.seek(comments_offset + 2**32) header, count = struct.unpack(byteorder + 'II', fh.read(8)) if header != 84720485: raise ValueError('invalid comments header') data = fh.read(count) if len(data) != count: raise ValueError('not enough data') result['Comments'] = json.loads(bytes2str(data, 'utf-8')) elif comments_header == 483729: # NDTiff v1 result['MajorVersion'] = comments_offset elif comments_header == 0 and comments_offset == 0: pass elif 'Comments' in keys: raise ValueError(f'invalid comments_header {comments_header}') except Exception as exc: logger().warning( ' failed to read comments, ' f'raised {exc!r:.128}' ) return result def read_ndtiff_index( file: str | os.PathLike[Any], / ) -> Iterator[ tuple[dict[str, int | str], str, int, int, int, int, int, int, int, int] ]: """Return iterator over fields in Micro-Manager NDTiff.index file. Parameters: file: Path of NDTiff.index file. Yields: Fields in NDTiff.index file: - axes_dict: Axes indices of frame in image. - filename: Name of file containing frame and metadata. - dataoffset: Offset of frame data in file. - width: Width of frame. - height: Height of frame. - pixeltype: Pixel type. 0: 8-bit monochrome; 1: 16-bit monochrome; 2: 8-bit RGB; 3: 10-bit monochrome; 4: 12-bit monochrome; 5: 14-bit monochrome; 6: 11-bit monochrome. - compression: Pixel compression. 0: Uncompressed. - metaoffset: Offset of JSON metadata in file. - metabytecount: Length of metadata. - metacompression: Metadata compression. 0: Uncompressed. """ with open(file, 'rb') as fh: while True: b = fh.read(4) if len(b) != 4: break k = struct.unpack(' dict[str, str] | None: """Read non-TIFF GDAL structural metadata from file. Return None if the file does not contain valid GDAL structural metadata. The metadata can be used to optimize reading image data from a COG file. """ fh.seek(0) try: if fh.read(2) not in {b'II', b'MM'}: raise ValueError('not a TIFF file') fh.seek({b'*': 8, b'+': 16}[fh.read(1)]) header = fh.read(43).decode() if header[:30] != 'GDAL_STRUCTURAL_METADATA_SIZE=': return None size = int(header[30:36]) lines = fh.read(size).decode() except Exception: return None result: dict[str, Any] = {} try: for line in lines.splitlines(): if '=' in line: key, value = line.split('=', 1) result[key.strip()] = value.strip() except Exception as exc: logger().warning( f' raised {exc!r:.128}' ) return None return result def read_metaseries_catalog(fh: FileHandle | IO[bytes], /) -> None: """Read MetaSeries non-TIFF hint catalog from file. Raise ValueError if the file does not contain a valid hint catalog. """ # TODO: implement read_metaseries_catalog raise NotImplementedError def imagej_metadata_tag( metadata: dict[str, Any], byteorder: ByteOrder, / ) -> tuple[ tuple[int, int, int, bytes, bool], tuple[int, int, int, bytes, bool] ]: """Return IJMetadata and IJMetadataByteCounts tags from metadata dict. Parameters: metadata: May contain the following keys and values: 'Info' (str): Human-readable information as string. 'Labels' (Sequence[str]): Human-readable label for each image. 'Ranges' (Sequence[float]): Lower and upper values for each channel. 'LUTs' (list[numpy.ndarray[(3, 256), 'uint8']]): Color palettes for each channel. 'Plot' (bytes): Undocumented ImageJ internal format. 'ROI', 'Overlays' (bytes): Undocumented ImageJ internal region of interest and overlay format. Can be created with the `roifile `_ package. 'Properties' (dict[str, str]): Map of key, value items. byteorder: Byte order of TIFF file. 
Returns: IJMetadata and IJMetadataByteCounts tags in :py:meth:`TiffWriter.write` `extratags` format. """ if not metadata: return () # type: ignore[return-value] header_list = [{'>': b'IJIJ', '<': b'JIJI'}[byteorder]] bytecount_list = [0] body_list = [] def _string(data: str, byteorder: ByteOrder, /) -> bytes: return data.encode('utf-16' + {'>': 'be', '<': 'le'}[byteorder]) def _doubles(data: Sequence[float], byteorder: ByteOrder, /) -> bytes: return struct.pack(f'{byteorder}{len(data)}d', *data) def _ndarray(data: NDArray[Any], byteorder: ByteOrder, /) -> bytes: return data.tobytes() def _bytes(data: bytes, byteorder: ByteOrder, /) -> bytes: return data metadata_types: tuple[ tuple[str, bytes, Callable[[Any, ByteOrder], bytes]], ... ] = ( ('Info', b'info', _string), ('Labels', b'labl', _string), ('Ranges', b'rang', _doubles), ('LUTs', b'luts', _ndarray), ('Plot', b'plot', _bytes), ('ROI', b'roi ', _bytes), ('Overlays', b'over', _bytes), ('Properties', b'prop', _string), ) for key, mtype, func in metadata_types: if key.lower() in metadata: key = key.lower() elif key not in metadata: continue if byteorder == '<': mtype = mtype[::-1] values = metadata[key] if isinstance(values, dict): values = [str(i) for item in values.items() for i in item] count = len(values) elif isinstance(values, list): count = len(values) else: values = [values] count = 1 header_list.append(mtype + struct.pack(byteorder + 'I', count)) for value in values: data = func(value, byteorder) body_list.append(data) bytecount_list.append(len(data)) if not body_list: return () # type: ignore[return-value] body = b''.join(body_list) header = b''.join(header_list) data = header + body bytecount_list[0] = len(header) bytecounts = struct.pack( byteorder + ('I' * len(bytecount_list)), *bytecount_list ) return ( (50839, 1, len(data), data, True), (50838, 4, len(bytecounts) // 4, bytecounts, True), ) def imagej_metadata( data: bytes, bytecounts: Sequence[int], byteorder: ByteOrder, / ) -> dict[str, Any]: """Return IJMetadata tag value. Parameters: bytes: Encoded value of IJMetadata tag. bytecounts: Value of IJMetadataByteCounts tag. byteorder: Byte order of TIFF file. Returns: Metadata dict with optional items: 'Info' (str): Human-readable information as string. Some formats, such as OIF or ScanImage, can be parsed into dicts with :py:func:`matlabstr2py` or the `oiffile.SettingsFile()` function of the `oiffile `_ package. 'Labels' (Sequence[str]): Human-readable labels for each channel. 'Ranges' (Sequence[float]): Lower and upper values for each channel. 'LUTs' (list[numpy.ndarray[(3, 256), 'uint8']]): Color palettes for each channel. 'Plot' (bytes): Undocumented ImageJ internal format. 'ROI', 'Overlays' (bytes): Undocumented ImageJ internal region of interest and overlay format. Can be parsed with the `roifile `_ package. 'Properties' (dict[str, str]): Map of key, value items. 
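
    Examples:
        Decode a minimal, handcrafted big-endian IJMetadata buffer
        (illustrative):

        >>> data = b'IJIJinfo' + struct.pack('>I', 1)
        >>> data += 'Hi'.encode('utf-16-be')
        >>> imagej_metadata(data, [12, 4], '>')
        {'Info': 'Hi'}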
""" def _string(data: bytes, byteorder: ByteOrder, /) -> str: return data.decode('utf-16' + {'>': 'be', '<': 'le'}[byteorder]) def _doubles(data: bytes, byteorder: ByteOrder, /) -> tuple[float, ...]: return struct.unpack(byteorder + ('d' * (len(data) // 8)), data) def _lut(data: bytes, byteorder: ByteOrder, /) -> NDArray[numpy.uint8]: return numpy.frombuffer(data, numpy.uint8).reshape(-1, 256) def _bytes(data: bytes, byteorder: ByteOrder, /) -> bytes: return data # big-endian metadata_types: dict[ bytes, tuple[str, Callable[[bytes, ByteOrder], Any]] ] = { b'info': ('Info', _string), b'labl': ('Labels', _string), b'rang': ('Ranges', _doubles), b'luts': ('LUTs', _lut), b'plot': ('Plot', _bytes), b'roi ': ('ROI', _bytes), b'over': ('Overlays', _bytes), b'prop': ('Properties', _string), } # little-endian metadata_types.update({k[::-1]: v for k, v in metadata_types.items()}) if len(bytecounts) == 0: raise ValueError('no ImageJ metadata') if data[:4] not in {b'IJIJ', b'JIJI'}: raise ValueError('invalid ImageJ metadata') header_size = bytecounts[0] if header_size < 12 or header_size > 804: raise ValueError('invalid ImageJ metadata header size') ntypes = (header_size - 4) // 8 header = struct.unpack( byteorder + '4sI' * ntypes, data[4 : 4 + ntypes * 8] ) pos = 4 + ntypes * 8 counter = 0 result = {} for mtype, count in zip(header[::2], header[1::2]): values = [] name, func = metadata_types.get(mtype, (bytes2str(mtype), _bytes)) for _ in range(count): counter += 1 pos1 = pos + bytecounts[counter] values.append(func(data[pos:pos1], byteorder)) pos = pos1 result[name.strip()] = values[0] if count == 1 else values prop = result.get('Properties') if prop and len(prop) % 2 == 0: result['Properties'] = dict( prop[i : i + 2] for i in range(0, len(prop), 2) ) return result def imagej_description_metadata(description: str, /) -> dict[str, Any]: r"""Return metatata from ImageJ image description. Raise ValueError if not a valid ImageJ description. >>> description = 'ImageJ=1.11a\nimages=510\nhyperstack=true\n' >>> imagej_description_metadata(description) # doctest: +SKIP {'ImageJ': '1.11a', 'images': 510, 'hyperstack': True} """ def _bool(val: str, /) -> bool: return {'true': True, 'false': False}[val.lower()] result: dict[str, Any] = {} for line in description.splitlines(): try: key, val = line.split('=') except Exception: continue key = key.strip() val = val.strip() for dtype in (int, float, _bool): try: val = dtype(val) # type: ignore[assignment] break except Exception: pass result[key] = val if 'ImageJ' not in result and 'SCIFIO' not in result: raise ValueError(f'not an ImageJ image description {result!r}') return result def imagej_description( shape: Sequence[int], /, axes: str | None = None, rgb: bool | None = None, colormaped: bool = False, **metadata: Any, # TODO: use TypedDict ) -> str: """Return ImageJ image description from data shape and metadata. Parameters: shape: Shape of image array. axes: Character codes for dimensions in `shape`. ImageJ can handle up to 6 dimensions in order TZCYXS. `Axes` and `shape` are used to determine the images, channels, slices, and frames entries of the image description. rgb: Image is RGB type. colormaped: Image is indexed color. **metadata: Additional items to be included in image description: hyperstack (bool): Image is a hyperstack. The default is True unless `colormapped` is true. mode (str): Display mode: 'grayscale', 'composite', or 'color'. The default is 'grayscale' unless `rgb` or `colormaped` are true. Ignored if `hyperstack` is false. 
loop (bool): Loop frames back and forth. The default is False. finterval (float): Frame interval in seconds. fps (float): Frames per seconds. The inverse of `finterval`. spacing (float): Voxel spacing in `unit` units. unit (str): Unit for `spacing` and X/YResolution tags. Usually 'um' (micrometer) or 'pixel'. xorigin, yorigin, zorigin (float): X, Y, and Z origins in pixel units. version (str): ImageJ version string. The default is '1.11a'. images, channels, slices, frames (int): Ignored. Examples: >>> imagej_description((51, 5, 2, 196, 171)) # doctest: +SKIP ImageJ=1.11a images=510 channels=2 slices=5 frames=51 hyperstack=true mode=grayscale loop=false """ mode = metadata.pop('mode', None) hyperstack = metadata.pop('hyperstack', None) loop = metadata.pop('loop', None) version = metadata.pop('ImageJ', '1.11a') if colormaped: hyperstack = False rgb = False shape = imagej_shape(shape, rgb=rgb, axes=axes) rgb = shape[-1] in {3, 4} append = [] result = [f'ImageJ={version}'] result.append(f'images={product(shape[:-3])}') if hyperstack is None: hyperstack = True append.append('hyperstack=true') else: append.append(f'hyperstack={bool(hyperstack)}'.lower()) if shape[2] > 1: result.append(f'channels={shape[2]}') if mode is None and not rgb and not colormaped: mode = 'grayscale' if hyperstack and mode: append.append(f'mode={mode}') if shape[1] > 1: result.append(f'slices={shape[1]}') if shape[0] > 1: result.append(f'frames={shape[0]}') if loop is None: append.append('loop=false') if loop is not None: append.append(f'loop={bool(loop)}'.lower()) for key, value in metadata.items(): if key not in {'images', 'channels', 'slices', 'frames', 'SCIFIO'}: if isinstance(value, bool): value = str(value).lower() append.append(f'{key.lower()}={value}') return '\n'.join(result + append + ['']) def imagej_shape( shape: Sequence[int], /, *, rgb: bool | None = None, axes: str | None = None, ) -> tuple[int, ...]: """Return shape normalized to 6D ImageJ hyperstack TZCYXS. Raise ValueError if not a valid ImageJ hyperstack shape or axes order. 
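
    For example, a volume with ZYX axes is normalized to TZCYXS:

    >>> imagej_shape((4, 256, 256), axes='ZYX')
    (1, 4, 1, 256, 256, 1)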
>>> imagej_shape((2, 3, 4, 5, 3), rgb=False) (2, 3, 4, 5, 3, 1) """ shape = tuple(int(i) for i in shape) ndim = len(shape) if 1 > ndim > 6: raise ValueError('ImageJ hyperstack must be 2-6 dimensional') if axes: if len(axes) != ndim: raise ValueError('ImageJ hyperstack shape and axes do not match') i = 0 axes = axes.upper() for ax in axes: j = 'TZCYXS'.find(ax) if j < i: raise ValueError( 'ImageJ hyperstack axes must be in TZCYXS order' ) i = j ndims = len(axes) newshape = [] i = 0 for ax in 'TZCYXS': if i < ndims and ax == axes[i]: newshape.append(shape[i]) i += 1 else: newshape.append(1) if newshape[-1] not in {1, 3, 4}: raise ValueError( 'ImageJ hyperstack must contain 1, 3, or 4 samples' ) return tuple(newshape) if rgb is None: rgb = shape[-1] in {3, 4} and ndim > 2 if rgb and shape[-1] not in {3, 4}: raise ValueError('ImageJ hyperstack is not a RGB image') if not rgb and ndim == 6 and shape[-1] != 1: raise ValueError('ImageJ hyperstack is not a grayscale image') if rgb or shape[-1] == 1: return (1,) * (6 - ndim) + shape return (1,) * (5 - ndim) + shape + (1,) def jpeg_decode_colorspace( photometric: int, planarconfig: int, extrasamples: tuple[int, ...], jfif: bool, /, ) -> tuple[int | None, int | str | None]: """Return JPEG and output color space for `jpeg_decode` function.""" colorspace: int | None = None outcolorspace: int | str | None = None if extrasamples: pass elif photometric == 6: # YCBCR -> RGB outcolorspace = 2 # RGB elif photometric == 2: # RGB -> RGB if not jfif: # found in Aperio SVS colorspace = 2 outcolorspace = 2 elif photometric == 5: # CMYK outcolorspace = 4 elif photometric > 3: outcolorspace = PHOTOMETRIC(photometric).name if planarconfig != 1: outcolorspace = 1 # decode separate planes to grayscale return colorspace, outcolorspace def jpeg_shape(jpeg: bytes, /) -> tuple[int, int, int, int]: """Return bitdepth and shape of JPEG image.""" i = 0 while i < len(jpeg): marker = struct.unpack('>H', jpeg[i : i + 2])[0] i += 2 if marker == 0xFFD8: # start of image continue if marker == 0xFFD9: # end of image break if 0xFFD0 <= marker <= 0xFFD7: # restart marker continue if marker == 0xFF01: # private marker continue length = struct.unpack('>H', jpeg[i : i + 2])[0] i += 2 if 0xFFC0 <= marker <= 0xFFC3: # start of frame return struct.unpack('>BHHB', jpeg[i : i + 6]) if marker == 0xFFDA: # start of scan break # skip to next marker i += length - 2 raise ValueError('no SOF marker found') def ndpi_jpeg_tile(jpeg: bytes, /) -> tuple[int, int, bytes]: """Return tile shape and JPEG header from JPEG with restart markers.""" marker: int length: int factor: int ncomponents: int restartinterval: int = 0 sofoffset: int = 0 sosoffset: int = 0 i: int = 0 while i < len(jpeg): marker = struct.unpack('>H', jpeg[i : i + 2])[0] i += 2 if marker == 0xFFD8: # start of image continue if marker == 0xFFD9: # end of image break if 0xFFD0 <= marker <= 0xFFD7: # restart marker continue if marker == 0xFF01: # private marker continue length = struct.unpack('>H', jpeg[i : i + 2])[0] i += 2 if marker == 0xFFDD: # define restart interval restartinterval = struct.unpack('>H', jpeg[i : i + 2])[0] elif marker == 0xFFC0: # start of frame sofoffset = i + 1 precision, imlength, imwidth, ncomponents = struct.unpack( '>BHHB', jpeg[i : i + 6] ) i += 6 mcuwidth = 1 mcuheight = 1 for _ in range(ncomponents): cid, factor, table = struct.unpack('>BBB', jpeg[i : i + 3]) i += 3 if factor >> 4 > mcuwidth: mcuwidth = factor >> 4 if factor & 0b00001111 > mcuheight: mcuheight = factor & 0b00001111 mcuwidth *= 8 mcuheight *= 8 
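            # example: with 4:2:0 chroma subsampling, the luma component has
            # sampling factor byte 0x22 (2 horizontal x 2 vertical data units
            # of 8x8 samples each), so the MCU covers 16x16 pixels and the
            # tile width below is restartinterval * 16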
i = sofoffset - 1 elif marker == 0xFFDA: # start of scan sosoffset = i + length - 2 break # skip to next marker i += length - 2 if restartinterval == 0 or sofoffset == 0 or sosoffset == 0: raise ValueError('missing required JPEG markers') # patch jpeg header for tile size tilelength = mcuheight tilewidth = restartinterval * mcuwidth jpegheader = ( jpeg[:sofoffset] + struct.pack('>HH', tilelength, tilewidth) + jpeg[sofoffset + 4 : sosoffset] ) return tilelength, tilewidth, jpegheader def shaped_description(shape: Sequence[int], /, **metadata: Any) -> str: """Return JSON image description from data shape and other metadata. Return UTF-8 encoded JSON. >>> shaped_description((256, 256, 3), axes='YXS') # doctest: +SKIP '{"shape": [256, 256, 3], "axes": "YXS"}' """ metadata.update(shape=shape) return json.dumps(metadata) # .encode() def shaped_description_metadata(description: str, /) -> dict[str, Any]: """Return metatata from JSON formatted image description. Raise ValueError if `description` is of unknown format. >>> description = '{"shape": [256, 256, 3], "axes": "YXS"}' >>> shaped_description_metadata(description) # doctest: +SKIP {'shape': [256, 256, 3], 'axes': 'YXS'} >>> shaped_description_metadata('shape=(256, 256, 3)') {'shape': (256, 256, 3)} """ if description[:6] == 'shape=': # old-style 'shaped' description; not JSON shape = tuple(int(i) for i in description[7:-1].split(',')) return {'shape': shape} if description[:1] == '{' and description[-1:] == '}': # JSON description return json.loads(description) raise ValueError('invalid JSON image description', description) def fluoview_description_metadata( description: str, /, ignoresections: Container[str] | None = None, ) -> dict[str, Any]: r"""Return metatata from FluoView image description. The FluoView image description format is unspecified. Expect failures. >>> descr = ( ... '[Intensity Mapping]\nMap Ch0: Range=00000 to 02047\n' ... '[Intensity Mapping End]' ... ) >>> fluoview_description_metadata(descr) {'Intensity Mapping': {'Map Ch0: Range': '00000 to 02047'}} """ if not description.startswith('['): raise ValueError('invalid FluoView image description') if ignoresections is None: ignoresections = {'Region Info (Fields)', 'Protocol Description'} section: Any result: dict[str, Any] = {} sections = [result] comment = False for line in description.splitlines(): if not comment: line = line.strip() if not line: continue if line[0] == '[': if line[-5:] == ' End]': # close section del sections[-1] section = sections[-1] name = line[1:-5] if comment: section[name] = '\n'.join(section[name]) if name[:4] == 'LUT ': a = numpy.array(section[name], dtype=numpy.uint8) a.shape = -1, 3 section[name] = a continue # new section comment = False name = line[1:-1] if name[:4] == 'LUT ': section = [] elif name in ignoresections: section = [] comment = True else: section = {} sections.append(section) result[name] = section continue # add entry if comment: section.append(line) continue lines = line.split('=', 1) if len(line) == 1: section[lines[0].strip()] = None continue key, value = lines if key[:4] == 'RGB ': section.extend(int(rgb) for rgb in value.split()) else: section[key.strip()] = astype(value.strip()) return result def pilatus_description_metadata(description: str, /) -> dict[str, Any]: """Return metatata from Pilatus image description. Return metadata from Pilatus pixel array detectors by Dectris, created by camserver or TVX software. 
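
    Header lines start with '# ' followed by a parameter name and its
    values and units, for example: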
>>> pilatus_description_metadata('# Pixel_size 172e-6 m x 172e-6 m') {'Pixel_size': (0.000172, 0.000172)} """ result: dict[str, Any] = {} values: Any if not description.startswith('# '): return result for c in '#:=,()': description = description.replace(c, ' ') for lines in description.split('\n'): if lines[:2] != ' ': continue line = lines.split() name = line[0] if line[0] not in TIFF.PILATUS_HEADER: try: result['DateTime'] = strptime( ' '.join(line), '%Y-%m-%dT%H %M %S.%f' ) except Exception: result[name] = ' '.join(line[1:]) continue indices, dtype = TIFF.PILATUS_HEADER[line[0]] if isinstance(indices[0], slice): # assumes one slice values = line[indices[0]] else: values = [line[i] for i in indices] if dtype is float and values[0] == 'not': values = ['NaN'] values = tuple(dtype(v) for v in values) if dtype is str: values = ' '.join(values) elif len(values) == 1: values = values[0] result[name] = values return result def svs_description_metadata(description: str, /) -> dict[str, Any]: """Return metatata from Aperio image description. The Aperio image description format is unspecified. Expect failures. >>> svs_description_metadata('Aperio Image Library v1.0') {'Header': 'Aperio Image Library v1.0'} """ if not description.startswith('Aperio '): raise ValueError('invalid Aperio image description') result = {} items = description.split('|') result['Header'] = items[0] for item in items[1:]: key, value = item.split('=', maxsplit=1) result[key.strip()] = astype(value.strip()) return result def stk_description_metadata(description: str, /) -> list[dict[str, Any]]: """Return metadata from MetaMorph image description. The MetaMorph image description format is unspecified. Expect failures. """ description = description.strip() if not description: return [] # try: # description = bytes2str(description) # except UnicodeDecodeError as exc: # logger().warning( # ' raised {exc!r:.128}' # ) # return [] result = [] for plane in description.split('\x00'): d = {} for line in plane.split('\r\n'): lines = line.split(':', 1) if len(lines) > 1: name, value = lines d[name.strip()] = astype(value.strip()) else: value = lines[0].strip() if value: if '' in d: d[''].append(value) else: d[''] = [value] result.append(d) return result def metaseries_description_metadata(description: str, /) -> dict[str, Any]: """Return metatata from MetaSeries image description.""" if not description.startswith(''): raise ValueError('invalid MetaSeries image description') import uuid from xml.etree import ElementTree as etree root = etree.fromstring(description) types: dict[str, Callable[..., Any]] = { 'float': float, 'int': int, 'bool': lambda x: asbool(x, 'on', 'off'), 'time': lambda x: strptime(x, '%Y%m%d %H:%M:%S.%f'), 'guid': uuid.UUID, # 'float-array': # 'colorref': } def parse( root: etree.Element, result: dict[str, Any], / ) -> dict[str, Any]: # recursive for child in root: attrib = child.attrib if not attrib: result[child.tag] = parse(child, {}) continue if 'id' in attrib: i = attrib['id'] t = attrib['type'] v = attrib['value'] if t in types: try: result[i] = types[t](v) except Exception: result[i] = v else: result[i] = v return result adict = parse(root, {}) if 'Description' in adict: adict['Description'] = adict['Description'].replace(' ', '\n') return adict def scanimage_description_metadata(description: str, /) -> Any: """Return metatata from ScanImage image description.""" return matlabstr2py(description) def scanimage_artist_metadata(artist: str, /) -> dict[str, Any] | None: """Return metatata from ScanImage artist 
tag.""" try: return json.loads(artist) except ValueError as exc: logger().warning( f' raised {exc!r:.128}' ) return None def olympusini_metadata(inistr: str, /) -> dict[str, Any]: """Return OlympusSIS metadata from INI string. No specification is available. """ def keyindex(key: str, /) -> tuple[str, int]: # split key into name and index index = 0 i = len(key.rstrip('0123456789')) if i < len(key): index = int(key[i:]) - 1 key = key[:i] return key, index result: dict[str, Any] = {} bands: list[dict[str, Any]] = [] value: Any zpos: list[Any] | None = None tpos: list[Any] | None = None for line in inistr.splitlines(): line = line.strip() if line == '' or line[0] == ';': continue if line[0] == '[' and line[-1] == ']': section_name = line[1:-1] result[section_name] = section = {} if section_name == 'Dimension': result['axes'] = axes = [] result['shape'] = shape = [] elif section_name == 'ASD': result[section_name] = [] elif section_name == 'Z': if 'Dimension' in result: result[section_name]['ZPos'] = zpos = [] elif section_name == 'Time': if 'Dimension' in result: result[section_name]['TimePos'] = tpos = [] elif section_name == 'Band': nbands = result['Dimension']['Band'] bands = [{'LUT': []} for _ in range(nbands)] result[section_name] = bands iband = 0 else: key, value = line.split('=') if value.strip() == '': value = None elif ',' in value: value = tuple(astype(v) for v in value.split(',')) else: value = astype(value) if section_name == 'Dimension': section[key] = value axes.append(key) shape.append(value) elif section_name == 'ASD': if key == 'Count': result['ASD'] = [{}] * value else: key, index = keyindex(key) result['ASD'][index][key] = value elif section_name == 'Band': if key[:3] == 'LUT': lut = bands[iband]['LUT'] value = struct.pack(' 1: axes.append(sisaxes.get(x, x[0].upper())) shape.append(i) result['axes'] = ''.join(axes) result['shape'] = tuple(shape) try: result['Z']['ZPos'] = numpy.array( result['Z']['ZPos'][: result['Dimension']['Z']], numpy.float64 ) except Exception: pass try: result['Time']['TimePos'] = numpy.array( result['Time']['TimePos'][: result['Dimension']['Time']], numpy.int32, ) except Exception: pass for band in bands: band['LUT'] = numpy.array(band['LUT'], numpy.uint8) return result def astrotiff_description_metadata( description: str, /, sep: str = ':' ) -> dict[str, Any]: """Return metatata from AstroTIFF image description.""" logmsg = ' ' counts: dict[str, int] = {} result: dict[str, Any] = {} value: Any for line in description.splitlines(): line = line.strip() if not line: continue key = line[:8].strip() value = line[8:] if not value.startswith('='): # for example, COMMENT or HISTORY if key + f'{sep}0' not in result: result[key + f'{sep}0'] = value counts[key] = 1 else: result[key + f'{sep}{counts[key]}'] = value counts[key] += 1 continue value = value[1:] if '/' in value: value, comment = value.split('/', 1) comment = comment.strip() else: comment = '' value = value.strip() if not value: # undefined value = None elif value[0] == "'": # string if len(value) < 2: logger().warning(logmsg + f'{key}: invalid string {value!r}') continue if value[-1] == "'": value = value[1:-1] else: # string containing '/' if not ("'" in comment and '/' in comment): logger().warning( logmsg + f'{key}: invalid string {value!r}' ) continue value, comment = line[9:].strip()[1:].split("'", 1) comment = comment.split('/', 1)[-1].strip() # TODO: string containing single quote ' elif value[0] == '(' and value[-1] == ')': # complex number value = value[1:-1] dtype = float if '.' 
in value else int value = tuple(dtype(v.strip()) for v in value.split(',')) elif value == 'T': value = True elif value == 'F': value = False elif '.' in value: value = float(value) else: try: value = int(value) except Exception: logger().warning(logmsg + f'{key}: invalid value {value!r}') continue if key in result: logger().warning(logmsg + f'{key}: duplicate key') result[key] = value if comment: result[key + f'{sep}COMMENT'] = comment if comment[0] == '[' and ']' in comment: result[key + f'{sep}UNIT'] = comment[1:].split(']', 1)[0] return result def streak_description_metadata( description: str, fh: FileHandle, / ) -> dict[str, Any]: """Return metatata from Hamamatsu streak image description.""" section_pattern = re.compile( r'\[([a-zA-Z0-9 _\-\.]+)\],([^\[]*)', re.DOTALL ) properties_pattern = re.compile( r'([a-zA-Z0-9 _\-\.]+)=(\"[^\"]*\"|[\+\-0-9\.]+|[^,]*)' ) result: dict[str, Any] = {} for section, values in section_pattern.findall(description.strip()): properties = {} for key, value in properties_pattern.findall(values): value = value.strip() if not value or value == '"': value = None elif value[0] == '"' and value[-1] == '"': value = value[1:-1] if ',' in value: try: value = tuple( ( float(v) if '.' in value else int(v[1:] if v[0] == '#' else v) ) for v in value.split(',') ) except ValueError: pass elif '.' in value: try: value = float(value) except ValueError: pass else: try: value = int(value) except ValueError: pass properties[key] = value result[section] = properties if not fh.closed: pos = fh.tell() for scaling in ('ScalingXScaling', 'ScalingYScaling'): try: offset, count = result['Scaling'][scaling + 'File'] fh.seek(offset) result['Scaling'][scaling] = fh.read_array( dtype=' NDArray[Any]: """Return array from bytes containing packed samples. Use to unpack RGB565 or RGB555 to RGB888 format. Works on little-endian platforms only. Parameters: data: Bytes to be decoded. Samples in each pixel are stored consecutively. Pixels are aligned to 8, 16, or 32 bit boundaries. dtype: Data type of samples. The byte order applies also to the data stream. bitspersample: Number of bits for each sample in pixel. rescale: Upscale samples to number of bits in dtype. Returns: Flattened array of unpacked samples of native dtype. Examples: >>> data = struct.pack('BBBB', 0x21, 0x08, 0xFF, 0xFF) >>> print(unpack_rgb(data, '>> print(unpack_rgb(data, '>> print(unpack_rgb(data, '= bits) data_array = numpy.frombuffer(data, dtype.byteorder + dt) result = numpy.empty((data_array.size, len(bitspersample)), dtype.char) for i, bps in enumerate(bitspersample): t = data_array >> int(numpy.sum(bitspersample[i + 1 :])) t &= int('0b' + '1' * bps, 2) if rescale: o = ((dtype.itemsize * 8) // bps + 1) * bps if o > data_array.dtype.itemsize * 8: t = t.astype('I') t *= (2**o - 1) // (2**bps - 1) t //= 2 ** (o - (dtype.itemsize * 8)) result[:, i] = t return result.reshape(-1) def apply_colormap( image: NDArray[Any], colormap: NDArray[Any], /, contig: bool = True ) -> NDArray[Any]: """Return palette-colored image. The image array values are used to index the colormap on axis 1. The returned image array is of shape `image.shape+colormap.shape[0]` and dtype `colormap.dtype`. Parameters: image: Array of indices into colormap. colormap: RGB lookup table aka palette of shape `(3, 2**bitspersample)`. contig: Return contiguous array. 
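
    Notes:
        The colormap is indexed along its second axis; values in `image`
        must be in the range `[0, colormap.shape[1])`.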
Examples: >>> im = numpy.arange(256, dtype='uint8') >>> colormap = numpy.vstack([im, im, im]).astype('uint16') * 256 >>> apply_colormap(im, colormap)[-1] array([65280, 65280, 65280], dtype=uint16) """ image = numpy.take(colormap, image, axis=1) image = numpy.rollaxis(image, 0, image.ndim) if contig: image = numpy.ascontiguousarray(image) return image def parse_filenames( files: Sequence[str], /, pattern: str | None = None, axesorder: Sequence[int] | None = None, categories: dict[str, dict[str, int]] | None = None, *, _shape: Sequence[int] | None = None, ) -> tuple[ tuple[str, ...], tuple[int, ...], list[tuple[int, ...]], Sequence[str] ]: r"""Return shape and axes from sequence of file names matching pattern. Parameters: files: Sequence of file names to parse. pattern: Regular expression pattern matching axes names and chunk indices in file names. By default, no pattern matching is performed. Axes names can be specified by matching groups preceding the index groups in the file name, be provided as group names for the index groups, or be omitted. The predefined 'axes' pattern matches Olympus OIF and Leica TIFF series. axesorder: Indices of axes in pattern. By default, axes are returned in the order they appear in pattern. categories: Map of index group matches to integer indices. `{'axislabel': {'category': index}}` _shape: Shape of file sequence. The default is `maximum - minimum + 1` of the parsed indices for each dimension. Returns: - Axes names for each dimension. - Shape of file series. - Index of each file in shape. - Filtered sequence of file names. Examples: >>> parse_filenames( ... ['c1001.ext', 'c2002.ext'], r'([^\d])(\d)(?P\d+)\.ext' ... ) (('c', 't'), (2, 2), [(0, 0), (1, 1)], ['c1001.ext', 'c2002.ext']) """ # TODO: add option to filter files that do not match pattern shape = None if _shape is None else tuple(_shape) if pattern is None: if shape is not None and (len(shape) != 1 or shape[0] < len(files)): raise ValueError( f'shape {(len(files),)} does not fit provided shape {shape}' ) return ( ('I',), (len(files),), [(i,) for i in range(len(files))], files, ) pattern = TIFF.FILE_PATTERNS.get(pattern, pattern) if not pattern: raise ValueError('invalid pattern') pattern_compiled: Any if isinstance(pattern, str): pattern_compiled = re.compile(pattern) elif hasattr(pattern, 'groupindex'): pattern_compiled = pattern else: raise ValueError('invalid pattern') if categories is None: categories = {} def parse(fname: str, /) -> tuple[tuple[str, ...], tuple[int, ...]]: # return axes names and indices from file name assert categories is not None dims: list[str] = [] indices: list[int] = [] groupindex = {v: k for k, v in pattern_compiled.groupindex.items()} match = pattern_compiled.search(fname) if match is None: raise ValueError(f'pattern does not match file name {fname!r}') ax = None for i, m in enumerate(match.groups()): if m is None: continue if i + 1 in groupindex: ax = groupindex[i + 1] elif m[0].isalpha(): ax = m # axis label for next index continue if ax is None: ax = 'Q' # no preceding axis letter try: if ax in categories: m = categories[ax][m] m = int(m) except Exception as exc: raise ValueError(f'invalid index {m!r}') from exc indices.append(m) dims.append(ax) ax = None return tuple(dims), tuple(indices) normpaths = [os.path.normpath(f) for f in files] if len(normpaths) == 1: prefix_str = os.path.dirname(normpaths[0]) else: prefix_str = os.path.commonpath(normpaths) prefix = len(prefix_str) dims: tuple[str, ...] 
| None = None indices: list[tuple[int, ...]] = [] for fname in normpaths: lbl, idx = parse(fname[prefix:]) if dims is None: dims = lbl if axesorder is not None and ( len(axesorder) != len(dims) or any(i not in axesorder for i in range(len(dims))) ): raise ValueError( f'invalid axesorder {axesorder!r} for {dims!r}' ) elif dims != lbl: raise ValueError('dims do not match within image sequence') if axesorder is not None: idx = tuple(idx[i] for i in axesorder) indices.append(idx) assert dims is not None if axesorder is not None: dims = tuple(dims[i] for i in axesorder) # determine shape indices_array = numpy.array(indices, dtype=numpy.intp) parsedshape = numpy.max(indices, axis=0) if shape is None: startindex = numpy.min(indices_array, axis=0) indices_array -= startindex parsedshape -= startindex parsedshape += 1 shape = tuple(int(i) for i in parsedshape.tolist()) elif len(parsedshape) != len(shape) or any( i > j for i, j in zip(shape, parsedshape) ): raise ValueError( f'parsed shape {parsedshape} does not fit provided shape {shape}' ) indices_list: list[list[int]] indices_list = indices_array.tolist() # type: ignore[assignment] indices = [tuple(index) for index in indices_list] return dims, shape, indices, files def iter_images(data: NDArray[Any], /) -> Iterator[NDArray[Any]]: """Return iterator over pages in data array of normalized shape.""" yield from data def iter_strips( pageiter: Iterator[NDArray[Any] | None], shape: tuple[int, ...], dtype: numpy.dtype[Any], rowsperstrip: int, /, ) -> Iterator[NDArray[Any]]: """Return iterator over strips in pages.""" numstrips = (shape[-3] + rowsperstrip - 1) // rowsperstrip for iteritem in pageiter: if iteritem is None: # for _ in range(numstrips): # yield None # continue pagedata = numpy.zeros(shape, dtype) else: pagedata = iteritem.reshape(shape) for plane in pagedata: for depth in plane: for i in range(numstrips): yield depth[i * rowsperstrip : (i + 1) * rowsperstrip] def iter_tiles( data: NDArray[Any], tile: tuple[int, ...], tiles: tuple[int, ...], /, ) -> Iterator[NDArray[Any]]: """Return iterator over full tiles in data array of normalized shape. Tiles are zero-padded if necessary. 
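
    Examples:
        Split a 5x5 plane into 4x4 tiles; edge tiles are zero-padded:

        >>> data = numpy.zeros((1, 1, 1, 5, 5, 1), 'uint8')
        >>> chunks = list(iter_tiles(data, (4, 4), (2, 2)))
        >>> len(chunks), chunks[0].shape
        (4, (4, 4, 1))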
""" if not 1 < len(tile) < 4 or len(tile) != len(tiles): raise ValueError('invalid tile or tiles shape') chunkshape = tile + (data.shape[-1],) chunksize = product(chunkshape) dtype = data.dtype sz, sy, sx = data.shape[2:5] if len(tile) == 2: y, x = tile for page in data: for plane in page: for ty in range(tiles[0]): ty *= y cy = min(y, sy - ty) for tx in range(tiles[1]): tx *= x cx = min(x, sx - tx) chunk = plane[0, ty : ty + cy, tx : tx + cx] if chunk.size != chunksize: chunk_ = numpy.zeros(chunkshape, dtype) chunk_[:cy, :cx] = chunk chunk = chunk_ yield chunk else: z, y, x = tile for page in data: for plane in page: for tz in range(tiles[0]): tz *= z cz = min(z, sz - tz) for ty in range(tiles[1]): ty *= y cy = min(y, sy - ty) for tx in range(tiles[2]): tx *= x cx = min(x, sx - tx) chunk = plane[ tz : tz + cz, ty : ty + cy, tx : tx + cx ] if chunk.size != chunksize: chunk_ = numpy.zeros(chunkshape, dtype) chunk_[:cz, :cy, :cx] = chunk chunk = chunk_ yield chunk[0] if z == 1 else chunk def encode_chunks( numchunks: int, chunkiter: Iterator[NDArray[Any] | None], encode: Callable[[NDArray[Any]], bytes], shape: Sequence[int], dtype: numpy.dtype[Any], maxworkers: int | None, buffersize: int | None, istile: bool, /, ) -> Iterator[bytes]: """Return iterator over encoded chunks.""" if numchunks <= 0: return chunksize = product(shape) * dtype.itemsize if istile: # pad tiles def func(chunk: NDArray[Any] | None, /) -> bytes: if chunk is None: return b'' chunk = numpy.ascontiguousarray(chunk, dtype) if chunk.nbytes != chunksize: # if chunk.dtype != dtype: # raise ValueError('dtype of chunk does not match data') pad = tuple((0, i - j) for i, j in zip(shape, chunk.shape)) chunk = numpy.pad(chunk, pad) return encode(chunk) else: # strips def func(chunk: NDArray[Any] | None, /) -> bytes: if chunk is None: return b'' chunk = numpy.ascontiguousarray(chunk, dtype) return encode(chunk) if maxworkers is None or maxworkers < 2 or numchunks < 2: for _ in range(numchunks): chunk = next(chunkiter) # assert chunk is None or isinstance(chunk, numpy.ndarray) yield func(chunk) del chunk return # because ThreadPoolExecutor.map is not collecting items lazily, reduce # memory overhead by processing chunks iterator maxchunks items at a time if buffersize is None: buffersize = TIFF.BUFFERSIZE * 2 maxchunks = max(maxworkers, buffersize // chunksize) if numchunks <= maxchunks: def chunks() -> Iterator[NDArray[Any] | None]: for _ in range(numchunks): chunk = next(chunkiter) # assert chunk is None or isinstance(chunk, numpy.ndarray) yield chunk del chunk with ThreadPoolExecutor(maxworkers) as executor: yield from executor.map(func, chunks()) return with ThreadPoolExecutor(maxworkers) as executor: count = 1 chunk_list = [] for _ in range(numchunks): chunk = next(chunkiter) if chunk is not None: count += 1 # assert chunk is None or isinstance(chunk, numpy.ndarray) chunk_list.append(chunk) if count == maxchunks: yield from executor.map(func, chunk_list) chunk_list.clear() count = 0 if chunk_list: yield from executor.map(func, chunk_list) def zarr_selection( store: ZarrStore, selection: Any, /, *, groupindex: int | None = None, close: bool = True, out: OutputType = None, ) -> NDArray[Any]: """Return selection from Zarr 2 store. Parameters: store: ZarrStore instance to read selection from. selection: Subset of image to be extracted and returned. Refer to the Zarr 2 documentation for valid selections. groupindex: Index of array if store is Zarr 2 group. close: Close store before returning. 
out: Specifies how image array is returned. By default, create a new array. If a *numpy.ndarray*, a writable array to which the images are copied. If *'memmap'*, create a memory-mapped array in a temporary file. If a *string* or *open file*, the file used to create a memory-mapped array. """ import zarr z = zarr.open(store, mode='r') try: if isinstance(z, zarr.hierarchy.Group): if groupindex is None: groupindex = 0 z = z[groupindex] if out is not None: shape = zarr.indexing.BasicIndexer(selection, z).shape out = create_output(out, shape, z.dtype) result = z.get_basic_selection(selection, out=out) finally: if close: store.close() return result def reorient( image: NDArray[Any], orientation: ORIENTATION | int | str, / ) -> NDArray[Any]: """Return reoriented view of image array. Parameters: image: Non-squeezed output of `asarray` functions. Axes -3 and -2 must be image length and width respectively. orientation: Value of Orientation tag. """ orientation = cast(ORIENTATION, enumarg(ORIENTATION, orientation)) if orientation == ORIENTATION.TOPLEFT: return image if orientation == ORIENTATION.TOPRIGHT: return image[..., ::-1, :] if orientation == ORIENTATION.BOTLEFT: return image[..., ::-1, :, :] if orientation == ORIENTATION.BOTRIGHT: return image[..., ::-1, ::-1, :] if orientation == ORIENTATION.LEFTTOP: return numpy.swapaxes(image, -3, -2) if orientation == ORIENTATION.RIGHTTOP: return numpy.swapaxes(image, -3, -2)[..., ::-1, :] if orientation == ORIENTATION.RIGHTBOT: return numpy.swapaxes(image, -3, -2)[..., ::-1, :, :] if orientation == ORIENTATION.LEFTBOT: return numpy.swapaxes(image, -3, -2)[..., ::-1, ::-1, :] return image def repeat_nd(a: ArrayLike, repeats: Sequence[int], /) -> NDArray[Any]: """Return read-only view into input array with elements repeated. Zoom image array by integer factors using nearest neighbor interpolation (box filter). Parameters: a: Input array. repeats: Number of repetitions to apply along each dimension of input. Examples: >>> repeat_nd([[1, 2], [3, 4]], (2, 2)) array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]) """ reshape: list[int] = [] shape: list[int] = [] strides: list[int] = [] a = numpy.asarray(a) for i, j, k in zip(a.strides, a.shape, repeats): shape.extend((j, k)) strides.extend((i, 0)) reshape.append(j * k) return numpy.lib.stride_tricks.as_strided( a, shape, strides, writeable=False ).reshape(reshape) @overload def reshape_nd( data_or_shape: tuple[int, ...], ndim: int, / ) -> tuple[int, ...]: ... @overload def reshape_nd(data_or_shape: NDArray[Any], ndim: int, /) -> NDArray[Any]: ... def reshape_nd( data_or_shape: tuple[int, ...] | NDArray[Any], ndim: int, / ) -> tuple[int, ...] | NDArray[Any]: """Return image array or shape with at least `ndim` dimensions. Prepend 1s to image shape as necessary. >>> reshape_nd(numpy.empty(0), 1).shape (0,) >>> reshape_nd(numpy.empty(1), 2).shape (1, 1) >>> reshape_nd(numpy.empty((2, 3)), 3).shape (1, 2, 3) >>> reshape_nd(numpy.empty((3, 4, 5)), 3).shape (3, 4, 5) >>> reshape_nd((2, 3), 3) (1, 2, 3) """ if isinstance(data_or_shape, tuple): shape = data_or_shape else: shape = data_or_shape.shape if len(shape) >= ndim: return data_or_shape shape = (1,) * (ndim - len(shape)) + shape if isinstance(data_or_shape, tuple): return shape return data_or_shape.reshape(shape) @overload def squeeze_axes( shape: Sequence[int], axes: str, /, skip: str | None = None, ) -> tuple[tuple[int, ...], str, tuple[bool, ...]]: ... 
@overload def squeeze_axes( shape: Sequence[int], axes: Sequence[str], /, skip: Sequence[str] | None = None, ) -> tuple[tuple[int, ...], Sequence[str], tuple[bool, ...]]: ... def squeeze_axes( shape: Sequence[int], axes: str | Sequence[str], /, skip: str | Sequence[str] | None = None, ) -> tuple[tuple[int, ...], str | Sequence[str], tuple[bool, ...]]: """Return shape and axes with length-1 dimensions removed. Remove unused dimensions unless their axes are listed in `skip`. Parameters: shape: Sequence of dimension sizes. axes: Character codes for dimensions in `shape`. skip: Character codes for dimensions whose length-1 dimensions are not removed. The default is 'XY'. Returns: shape: Sequence of dimension sizes with length-1 dimensions removed. axes: Character codes for dimensions in output `shape`. squeezed: Dimensions were kept (True) or removed (False). Examples: >>> squeeze_axes((5, 1, 2, 1, 1), 'TZYXC') ((5, 2, 1), 'TYX', (True, False, True, True, False)) >>> squeeze_axes((1,), 'Q') ((1,), 'Q', (True,)) """ if len(shape) != len(axes): raise ValueError('dimensions of axes and shape do not match') if not axes: return tuple(shape), axes, () if skip is None: skip = 'X', 'Y', 'width', 'height', 'length' squeezed: list[bool] = [] shape_squeezed: list[int] = [] axes_squeezed: list[str] = [] for size, ax in zip(shape, axes): if size > 1 or ax in skip: squeezed.append(True) shape_squeezed.append(size) axes_squeezed.append(ax) else: squeezed.append(False) if len(shape_squeezed) == 0: squeezed[-1] = True shape_squeezed.append(shape[-1]) axes_squeezed.append(axes[-1]) if isinstance(axes, str): axes = ''.join(axes_squeezed) else: axes = tuple(axes_squeezed) return (tuple(shape_squeezed), axes, tuple(squeezed)) def transpose_axes( image: NDArray[Any], axes: str, /, asaxes: Sequence[str] | None = None, ) -> NDArray[Any]: """Return image array with its axes permuted to match specified axes. Parameters: image: Image array to permute. axes: Character codes for dimensions in image array. asaxes: Character codes for dimensions in output image array. The default is 'CTZYX'. Returns: Transposed image array. A length-1 dimension is added for added dimensions. A view of the input array is returned if possible. Examples: >>> transpose_axes( ... numpy.zeros((2, 3, 4, 5)), 'TYXC', asaxes='CTZYX' ... ).shape (5, 2, 1, 3, 4) """ if asaxes is None: asaxes = 'CTZYX' for ax in axes: if ax not in asaxes: raise ValueError(f'unknown axis {ax}') # add missing axes to image shape = image.shape for ax in reversed(asaxes): if ax not in axes: axes = ax + axes shape = (1,) + shape image = image.reshape(shape) # transpose axes image = image.transpose([axes.index(ax) for ax in asaxes]) return image @overload def reshape_axes( axes: str, shape: Sequence[int], newshape: Sequence[int], /, unknown: str | None = None, ) -> str: ... @overload def reshape_axes( axes: Sequence[str], shape: Sequence[int], newshape: Sequence[int], /, unknown: str | None = None, ) -> Sequence[str]: ... def reshape_axes( axes: str | Sequence[str], shape: Sequence[int], newshape: Sequence[int], /, unknown: str | None = None, ) -> str | Sequence[str]: """Return axes matching new shape. Parameters: axes: Character codes for dimensions in `shape`. shape: Input shape matching `axes`. newshape: Output shape matching output axes. Size must match size of `shape`. unknown: Character used for new axes in output. The default is 'Q'. Returns: Character codes for dimensions in `newshape`. 
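    New dimensions that cannot be matched to an input axis are labeled
    with the `unknown` character.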
Examples: >>> reshape_axes('YXS', (219, 301, 1), (219, 301)) 'YX' >>> reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 1, 301, 1)) 'QQYQXQ' """ shape = tuple(shape) newshape = tuple(newshape) if len(axes) != len(shape): raise ValueError('axes do not match shape') size = product(shape) newsize = product(newshape) if size != newsize: raise ValueError(f'cannot reshape {shape} to {newshape}') if not axes or not newshape: return '' if isinstance(axes, str) else tuple() lendiff = max(0, len(shape) - len(newshape)) if lendiff: newshape = newshape + (1,) * lendiff i = len(shape) - 1 prodns = 1 prods = 1 result = [] for ns in newshape[::-1]: prodns *= ns while i > 0 and shape[i] == 1 and ns != 1: i -= 1 if ns == shape[i] and prodns == prods * shape[i]: prods *= shape[i] result.append(axes[i]) i -= 1 elif unknown: result.append(unknown) else: unknown = 'Q' result.append(unknown) if isinstance(axes, str): axes = ''.join(reversed(result[lendiff:])) else: axes = tuple(reversed(result[lendiff:])) return axes def order_axes( indices: ArrayLike, /, squeeze: bool = False, ) -> tuple[int, ...]: """Return order of axes sorted by variations in indices. Parameters: indices: Multi-dimensional indices of chunks in array. squeeze: Remove length-1 dimensions of nonvarying axes. Returns: Order of axes sorted by variations in indices. The axis with the least variations in indices is returned first, the axis varying fastest is last. Examples: First axis varies fastest, second axis is squeezed: >>> order_axes([(0, 2, 0), (1, 2, 0), (0, 2, 1), (1, 2, 1)], True) (2, 0) """ diff = numpy.sum(numpy.abs(numpy.diff(indices, axis=0)), axis=0).tolist() order = tuple(sorted(range(len(diff)), key=diff.__getitem__)) if squeeze: order = tuple(i for i in order if diff[i] != 0) return order def check_shape( page_shape: Sequence[int], series_shape: Sequence[int] ) -> bool: """Return if page and series shapes are compatible.""" pi = product(page_shape) pj = product(series_shape) if pi == 0 and pj == 0: return True if pi == 0 or pj == 0: return False if pj % pi: return False series_shape = tuple(reversed(series_shape)) a = 0 pi = pj = 1 for i in reversed(page_shape): pi *= i # if a == len(series_shape): # return not pj % pi for j in series_shape[a:]: a += 1 pj *= j if i == j or pi == pj: break if j == 1: continue if pj != pi: return False return True @overload def subresolution( a: TiffPage, b: TiffPage, /, p: int = 2, n: int = 16 ) -> int | None: ... @overload def subresolution( a: TiffPageSeries, b: TiffPageSeries, /, p: int = 2, n: int = 16 ) -> int | None: ... def subresolution( a: TiffPage | TiffPageSeries, b: TiffPage | TiffPageSeries, /, p: int = 2, n: int = 16, ) -> int | None: """Return level of subresolution of series or page b vs a.""" if a.axes != b.axes or a.dtype != b.dtype: return None level = None for ax, i, j in zip(a.axes.lower(), a.shape, b.shape): if ax in 'xyz': if level is None: for r in range(n): d = p**r if d > i: return None if abs((i / d) - j) < 1.0: level = r break else: return None else: d = p**level if d > i: return None if abs((i / d) - j) >= 1.0: return None elif i != j: return None return level def pyramidize_series( series: list[TiffPageSeries], /, isreduced: bool = False ) -> None: """Pyramidize list of TiffPageSeries in-place. TiffPageSeries that are a subresolution of another TiffPageSeries are appended to the other's TiffPageSeries levels and removed from the list. Levels are to be ordered by size using the same downsampling factor. TiffPageSeries of subifds cannot be pyramid top levels. 
""" samplingfactors = (2, 3, 4) i = 0 while i < len(series): a = series[i] p = None j = i + 1 if a.keyframe.is_subifd: # subifds cannot be pyramid top levels i += 1 continue while j < len(series): b = series[j] if isreduced and not b.keyframe.is_reduced: # pyramid levels must be reduced j += 1 continue # not a pyramid level if p is None: for f in samplingfactors: if subresolution(a.levels[-1], b, p=f) == 1: p = f break # not a pyramid level else: j += 1 continue # not a pyramid level elif subresolution(a.levels[-1], b, p=p) != 1: j += 1 continue a.levels.append(b) del series[j] i += 1 def stack_pages( pages: Sequence[TiffPage | TiffFrame | None], /, *, tiled: TiledSequence | None = None, lock: threading.RLock | NullContext | None = None, maxworkers: int | None = None, out: OutputType = None, **kwargs: Any, ) -> NDArray[Any]: """Return vertically stacked image arrays from sequence of TIFF pages. Parameters: pages: TIFF pages or frames to stack. tiled: Organize pages in non-overlapping grid. lock: Reentrant lock to synchronize seeks and reads from file. maxworkers: Maximum number of threads to concurrently decode pages or segments. By default, use up to :py:attr:`_TIFF.MAXWORKERS` threads. out: Specifies how image array is returned. By default, a new NumPy array is created. If a *numpy.ndarray*, a writable array to which the images are copied. If a string or open file, the file used to create a memory-mapped array. **kwargs: Additional arguments passed to :py:meth:`TiffPage.asarray`. """ npages = len(pages) if npages == 0: raise ValueError('no pages') if npages == 1: kwargs['maxworkers'] = maxworkers assert pages[0] is not None return pages[0].asarray(out=out, **kwargs) page0 = next(p.keyframe for p in pages if p is not None) assert page0 is not None if tiled is None: shape = (npages,) + page0.shape else: shape = tiled.shape dtype = page0.dtype assert dtype is not None out = create_output(out, shape, dtype) # TODO: benchmark and optimize this if maxworkers is None or maxworkers < 1: # auto-detect page_maxworkers = page0.maxworkers maxworkers = min(npages, TIFF.MAXWORKERS) if maxworkers == 1 or page_maxworkers < 1: maxworkers = page_maxworkers = 1 elif npages < 3: maxworkers = 1 elif ( page_maxworkers <= 2 and page0.compression == 1 and page0.fillorder == 1 and page0.predictor == 1 ): maxworkers = 1 else: page_maxworkers = 1 elif maxworkers == 1: maxworkers = page_maxworkers = 1 elif npages > maxworkers or page0.maxworkers < 2: page_maxworkers = 1 else: page_maxworkers = maxworkers maxworkers = 1 kwargs['maxworkers'] = page_maxworkers fh = page0.parent.filehandle if lock is None: haslock = fh.has_lock if not haslock and maxworkers > 1 or page_maxworkers > 1: fh.set_lock(True) lock = fh.lock else: haslock = True filecache = FileCache(size=max(4, maxworkers), lock=lock) if tiled is None: def func( page: TiffPage | TiffFrame | None, index: int, out: Any = out, filecache: FileCache = filecache, kwargs: dict[str, Any] = kwargs, /, ) -> None: # read, decode, and copy page data if page is not None: filecache.open(page.parent.filehandle) page.asarray(lock=lock, out=out[index], **kwargs) filecache.close(page.parent.filehandle) if maxworkers < 2: for index, page in enumerate(pages): func(page, index) else: page0.decode # init TiffPage.decode function with ThreadPoolExecutor(maxworkers) as executor: for _ in executor.map(func, pages, range(npages)): pass else: # TODO: not used or tested def func_tiled( page: TiffPage | TiffFrame | None, index: tuple[int | slice, ...], out: Any = out, filecache: 
FileCache = filecache, kwargs: dict[str, Any] = kwargs, /, ) -> None: # read, decode, and copy page data if page is not None: filecache.open(page.parent.filehandle) out[index] = page.asarray(lock=lock, **kwargs) filecache.close(page.parent.filehandle) if maxworkers < 2: for index_tiled, page in zip(tiled.slices(), pages): func_tiled(page, index_tiled) else: page0.decode # init TiffPage.decode function with ThreadPoolExecutor(maxworkers) as executor: for _ in executor.map(func_tiled, pages, tiled.slices()): pass filecache.clear() if not haslock: fh.set_lock(False) return out def create_output( out: OutputType, /, shape: Sequence[int], dtype: DTypeLike, *, mode: Literal['r+', 'w+', 'r', 'c'] = 'w+', suffix: str | None = None, fillvalue: int | float | None = 0, ) -> NDArray[Any] | numpy.memmap[Any, Any]: """Return NumPy array where images of shape and dtype can be copied. Parameters: out: Specifies kind of array to return: `None`: A new array of shape and dtype is created and returned. `numpy.ndarray`: An existing, writable array compatible with `dtype` and `shape`. A view of the array is returned. `'memmap'` or `'memmap:tempdir'`: A memory-map to an array stored in a temporary binary file on disk is created and returned. `str` or open file: File name or file object used to create a memory-map to an array stored in a binary file on disk. The memory-mapped array is returned. shape: Shape of NumPy array to return. dtype: Data type of NumPy array to return. suffix: Suffix of `NamedTemporaryFile` if `out` is 'memmap'. The default suffix is 'memmap'. fillvalue: Value to initialize newly created arrays. If *None*, return an uninitialized array. """ shape = tuple(shape) if out is None: if fillvalue is None: return numpy.empty(shape, dtype) if fillvalue: out = numpy.empty(shape, dtype) out[:] = fillvalue return out return numpy.zeros(shape, dtype) if isinstance(out, numpy.ndarray): if product(shape) != product(out.shape): raise ValueError('incompatible output shape') if not numpy.can_cast(dtype, out.dtype): raise ValueError('incompatible output dtype') return out.reshape(shape) if isinstance(out, str) and out[:6] == 'memmap': import tempfile tempdir = out[7:] if len(out) > 7 else None if suffix is None: suffix = '.memmap' with tempfile.NamedTemporaryFile(dir=tempdir, suffix=suffix) as fh: out = numpy.memmap(fh, shape=shape, dtype=dtype, mode=mode) if fillvalue: out[:] = fillvalue return out out = numpy.memmap(out, shape=shape, dtype=dtype, mode=mode) if fillvalue: out[:] = fillvalue return out def matlabstr2py(matlabstr: str, /) -> Any: r"""Return Python object from Matlab string representation. Use to access ScanImage metadata. Parameters: matlabstr: String representation of Matlab objects. Returns: Matlab structures are returned as `dict`. Matlab arrays or cells are returned as `lists`. Other Matlab objects are returned as `str`, `bool`, `int`, or `float`. Examples: >>> matlabstr2py('1') 1 >>> matlabstr2py("['x y z' true false; 1 2.0 -3e4; NaN Inf @class]") [['x y z', True, False], [1, 2.0, -30000.0], [nan, inf, '@class']] >>> d = matlabstr2py( ... "SI.hChannels.channelType = {'stripe' 'stripe'}\n" ... "SI.hChannels.channelsActive = 2" ... 
) >>> d['SI.hChannels.channelType'] ['stripe', 'stripe'] """ # TODO: handle invalid input # TODO: review unboxing of multidimensional arrays def lex(s: str, /) -> list[str]: # return sequence of tokens from Matlab string representation tokens = ['['] while True: t, i = next_token(s) if t is None: break if t == ';': tokens.extend((']', '[')) elif t == '[': tokens.extend(('[', '[')) elif t == ']': tokens.extend((']', ']')) else: tokens.append(t) s = s[i:] tokens.append(']') return tokens def next_token(s: str, /) -> tuple[str | None, int]: # return next token in Matlab string length = len(s) if length == 0: return None, 0 i = 0 while i < length and s[i] == ' ': i += 1 if i == length: return None, i if s[i] in '{[;]}': return s[i], i + 1 if s[i] == "'": j = i + 1 while j < length and s[j] != "'": j += 1 return s[i : j + 1], j + 1 if s[i] == '<': j = i + 1 while j < length and s[j] != '>': j += 1 return s[i : j + 1], j + 1 j = i while j < length and s[j] not in ' {[;]}': j += 1 return s[i:j], j def value(s: str, fail: bool = False, /) -> Any: # return Python value of token s = s.strip() if not s: return s if len(s) == 1: try: return int(s) except Exception as exc: if fail: raise ValueError from exc return s if s[0] == "'": if fail and s[-1] != "'" or "'" in s[1:-1]: raise ValueError return s[1:-1] if s[0] == '<': if fail and s[-1] != '>' or '<' in s[1:-1]: raise ValueError return s if fail and any(i in s for i in " ';[]{}"): raise ValueError if s[0] == '@': return s if s in {'true', 'True'}: return True if s in {'false', 'False'}: return False if s[:6] == 'zeros(': return numpy.zeros([int(i) for i in s[6:-1].split(',')]).tolist() if s[:5] == 'ones(': return numpy.ones([int(i) for i in s[5:-1].split(',')]).tolist() if '.' in s or 'e' in s: try: return float(s) except Exception: pass try: return int(s) except Exception: pass try: return float(s) # nan, inf except Exception as exc: if fail: raise ValueError from exc return s def parse(s: str, /) -> Any: # return Python value from string representation of Matlab value s = s.strip() try: return value(s, True) except ValueError: pass result: list[Any] addto: list[Any] result = addto = [] levels = [addto] for t in lex(s): if t in '[{': addto = [] levels.append(addto) elif t in ']}': x = levels.pop() addto = levels[-1] if len(x) == 1 and isinstance(x[0], (list, str)): addto.append(x[0]) else: addto.append(x) else: addto.append(value(t)) if len(result) == 1 and isinstance(result[0], (list, str)): return result[0] return result if '\r' in matlabstr or '\n' in matlabstr: # structure d = {} for line in matlabstr.splitlines(): line = line.strip() if not line or line[0] == '%': continue k, v = line.split('=', 1) k = k.strip() if any(c in k for c in " ';[]{}<>"): continue d[k] = parse(v) return d return parse(matlabstr) def strptime(datetime_string: str, format: str | None = None, /) -> DateTime: """Return datetime corresponding to date string using common formats. Parameters: datetime_string: String representation of date and time. format: Format of `datetime_string`. By default, several datetime formats commonly found in TIFF files are parsed. Raises: ValueError: `datetime_string` does not match any format. 

    Examples:
        >>> strptime('2022:08:01 22:23:24')
        datetime.datetime(2022, 8, 1, 22, 23, 24)

    """
    formats = {
        '%Y:%m:%d %H:%M:%S': 1,  # TIFF6 specification
        '%Y%m%d %H:%M:%S.%f': 2,  # MetaSeries
        '%Y-%m-%dT%H %M %S.%f': 3,  # Pilatus
        '%Y-%m-%dT%H:%M:%S.%f': 4,  # ISO
        '%Y-%m-%dT%H:%M:%S': 5,  # ISO, microsecond is 0
        '%Y:%m:%d %H:%M:%S.%f': 6,
        '%d/%m/%Y %H:%M:%S': 7,
        '%d/%m/%Y %H:%M:%S.%f': 8,
        '%m/%d/%Y %I:%M:%S %p': 9,
        '%m/%d/%Y %I:%M:%S.%f %p': 10,
        '%Y%m%d %H:%M:%S': 11,
        '%Y/%m/%d %H:%M:%S': 12,
        '%Y/%m/%d %H:%M:%S.%f': 13,
        '%Y-%m-%dT%H:%M:%S%z': 14,
        '%Y-%m-%dT%H:%M:%S.%f%z': 15,
    }
    if format is not None:
        # highest priority; replaces existing key if any
        formats[format] = 0
    for format, _ in sorted(formats.items(), key=lambda item: item[1]):
        try:
            return DateTime.strptime(datetime_string, format)
        except ValueError:
            pass
    raise ValueError(
        f'time data {datetime_string!r} does not match any format'
    )


@overload
def stripnull(
    string: bytes, /, null: bytes | None = None, *, first: bool = True
) -> bytes: ...


@overload
def stripnull(
    string: str, /, null: str | None = None, *, first: bool = True
) -> str: ...


def stripnull(
    string: str | bytes,
    /,
    null: str | bytes | None = None,
    *,
    first: bool = True,
) -> str | bytes:
    r"""Return string truncated at first null character.

    Use to clean NULL terminated C strings.

    >>> stripnull(b'bytes\x00\x00')
    b'bytes'
    >>> stripnull(b'bytes\x00bytes\x00\x00', first=False)
    b'bytes\x00bytes'
    >>> stripnull('string\x00')
    'string'

    """
    # TODO: enable deprecation warning
    # warnings.warn(
    #     '<tifffile.stripnull> is deprecated',
    #     DeprecationWarning,
    #     stacklevel=2,
    # )
    if null is None:
        null = b'\x00' if isinstance(string, bytes) else '\0'
    if first:
        # truncate at first null character
        i = string.find(null)  # type: ignore[arg-type]
        return string if i < 0 else string[:i]
    # else strip trailing null characters
    return string.rstrip(null)  # type: ignore[arg-type]


def stripascii(string: bytes, /) -> bytes:
    r"""Return string truncated at last byte that is 7-bit ASCII.

    Use to clean NULL separated and terminated TIFF strings.

    >>> stripascii(b'string\x00string\n\x01\x00')
    b'string\x00string\n'
    >>> stripascii(b'\x00')
    b''

    """
    # TODO: pythonize this
    i = len(string)
    while i:
        i -= 1
        if 8 < string[i] < 127:
            break
    else:
        i = -1
    return string[: i + 1]


@overload
def asbool(
    value: str,
    /,
    true: Sequence[str] | None = None,
    false: Sequence[str] | None = None,
) -> bool: ...


@overload
def asbool(
    value: bytes,
    /,
    true: Sequence[bytes] | None = None,
    false: Sequence[bytes] | None = None,
) -> bool: ...


def asbool(
    value: str | bytes,
    /,
    true: Sequence[str | bytes] | None = None,
    false: Sequence[str | bytes] | None = None,
) -> bool | bytes:
    """Return string as bool if possible, else raise TypeError.

    >>> asbool(b' False ')
    False
    >>> asbool('ON', ['on'], ['off'])
    True

    """
    value = value.strip().lower()
    isbytes = False
    if true is None:
        if isinstance(value, bytes):
            if value == b'true':
                return True
            isbytes = True
        elif value == 'true':
            return True
    elif value in true:
        return True
    if false is None:
        if isbytes or isinstance(value, bytes):
            if value == b'false':
                return False
        elif value == 'false':
            return False
    elif value in false:
        return False
    raise TypeError


def astype(value: Any, /, types: Sequence[Any] | None = None) -> Any:
    """Return argument as one of types if possible.
>>> astype('42') 42 >>> astype('3.14') 3.14 >>> astype('True') True >>> astype(b'Neee-Wom') 'Neee-Wom' """ if types is None: types = int, float, asbool, bytes2str for typ in types: try: return typ(value) except (ValueError, AttributeError, TypeError, UnicodeEncodeError): pass return value def rational(arg: float | tuple[int, int], /) -> tuple[int, int]: """Return rational numerator and denominator from float or two integers.""" from fractions import Fraction if isinstance(arg, Sequence): f = Fraction(arg[0], arg[1]) else: f = Fraction.from_float(arg) numerator, denominator = f.as_integer_ratio() if numerator > 4294967295 or denominator > 4294967295: s = 4294967295 / max(numerator, denominator) numerator = round(numerator * s) denominator = round(denominator * s) return numerator, denominator def unique_strings(strings: Iterator[str], /) -> Iterator[str]: """Return iterator over unique strings. >>> list(unique_strings(iter(('a', 'b', 'a')))) ['a', 'b', 'a2'] """ known = set() for i, string in enumerate(strings): if string in known: string += str(i) known.add(string) yield string def format_size(size: int | float, /, threshold: int | float = 1536) -> str: """Return file size as string from byte size. >>> format_size(1234) '1234 B' >>> format_size(12345678901) '11.50 GiB' """ if size < threshold: return f'{size} B' for unit in ('KiB', 'MiB', 'GiB', 'TiB', 'PiB'): size /= 1024.0 if size < threshold: return f'{size:.2f} {unit}' return 'ginormous' def identityfunc(arg: Any, /, *args: Any, **kwargs: Any) -> Any: """Single argument identity function. >>> identityfunc('arg') 'arg' """ return arg def nullfunc(*args: Any, **kwargs: Any) -> None: """Null function. >>> nullfunc('arg', kwarg='kwarg') """ return def sequence(value: Any, /) -> Sequence[Any]: """Return tuple containing value if value is not tuple or list. >>> sequence(1) (1,) >>> sequence([1]) [1] >>> sequence('ab') ('ab',) """ return value if isinstance(value, (tuple, list)) else (value,) def product(iterable: Iterable[int], /) -> int: """Return product of integers. Equivalent of ``math.prod(iterable)``, but multiplying NumPy integers does not overflow. >>> product([2**8, 2**30]) 274877906944 >>> product([]) 1 """ prod = 1 for i in iterable: prod *= int(i) return prod def peek_iterator(iterator: Iterator[Any], /) -> tuple[Any, Iterator[Any]]: """Return first item of iterator and iterator. >>> first, it = peek_iterator(iter((0, 1, 2))) >>> first 0 >>> list(it) [0, 1, 2] """ first = next(iterator) def newiter( first: Any = first, iterator: Iterator[Any] = iterator ) -> Iterator[Any]: yield first yield from iterator return first, newiter() def natural_sorted(iterable: Iterable[str], /) -> list[str]: """Return human-sorted list of strings. Use to sort file names. >>> natural_sorted(['f1', 'f2', 'f10']) ['f1', 'f2', 'f10'] """ def sortkey(x: str, /) -> list[int | str]: return [(int(c) if c.isdigit() else c) for c in re.split(numbers, x)] numbers = re.compile(r'(\d+)') return sorted(iterable, key=sortkey) def epics_datetime(sec: int, nsec: int, /) -> DateTime: """Return datetime object from epicsTSSec and epicsTSNsec tag values. >>> epics_datetime(802117916, 103746502) datetime.datetime(2015, 6, 2, 11, 31, 56, 103746) """ return DateTime.fromtimestamp(sec + 631152000 + nsec / 1e9) def excel_datetime(timestamp: float, epoch: int | None = None, /) -> DateTime: """Return datetime object from timestamp in Excel serial format. Use to convert LSM time stamps. 

    >>> excel_datetime(40237.029999999795)
    datetime.datetime(2010, 2, 28, 0, 43, 11, 999982)

    """
    if epoch is None:
        epoch = 693594
    return DateTime.fromordinal(epoch) + TimeDelta(timestamp)


def julian_datetime(julianday: int, millisecond: int = 0, /) -> DateTime:
    """Return datetime from days since 1/1/4713 BC and ms since midnight.

    Convert Julian dates according to MetaMorph.

    >>> julian_datetime(2451576, 54362783)
    datetime.datetime(2000, 2, 2, 15, 6, 2, 783000)

    """
    if julianday <= 1721423:
        # return DateTime.min  # ?
        raise ValueError(f'no datetime before year 1 ({julianday=})')

    a = julianday + 1
    if a > 2299160:
        alpha = math.trunc((a - 1867216.25) / 36524.25)
        a += 1 + alpha - alpha // 4
    b = a + (1524 if a > 1721423 else 1158)
    c = math.trunc((b - 122.1) / 365.25)
    d = math.trunc(365.25 * c)
    e = math.trunc((b - d) / 30.6001)

    day = b - d - math.trunc(30.6001 * e)
    month = e - (1 if e < 13.5 else 13)
    year = c - (4716 if month > 2.5 else 4715)

    hour, millisecond = divmod(millisecond, 1000 * 60 * 60)
    minute, millisecond = divmod(millisecond, 1000 * 60)
    second, millisecond = divmod(millisecond, 1000)

    return DateTime(
        year, month, day, hour, minute, second, millisecond * 1000
    )


def byteorder_isnative(byteorder: str, /) -> bool:
    """Return if byteorder matches system's byteorder.

    >>> byteorder_isnative('=')
    True

    """
    if byteorder in {'=', sys.byteorder}:
        return True
    keys = {'big': '>', 'little': '<'}
    return keys.get(byteorder, byteorder) == keys[sys.byteorder]


def byteorder_compare(byteorder: str, other: str, /) -> bool:
    """Return if byteorders match.

    >>> byteorder_compare('<', '<')
    True
    >>> byteorder_compare('>', '<')
    False

    """
    if byteorder in {other, '|'} or other == '|':
        return True
    if byteorder == '=':
        byteorder = {'big': '>', 'little': '<'}[sys.byteorder]
    elif other == '=':
        other = {'big': '>', 'little': '<'}[sys.byteorder]
    return byteorder == other


def recarray2dict(recarray: numpy.recarray[Any, Any], /) -> dict[str, Any]:
    """Return numpy.recarray as dictionary.

    >>> r = numpy.array(
    ...     [(1.0, 2, 'a'), (3.0, 4, 'bc')],
    ...     dtype=[('x', '<f4'), ('y', '<i4'), ('s', 'S2')],
    ... )
    >>> recarray2dict(r)
    {'x': [1.0, 3.0], 'y': [2, 4], 's': ['a', 'bc']}
    >>> recarray2dict(r[1])
    {'x': 3.0, 'y': 4, 's': 'bc'}

    """
    # TODO: subarrays
    value: Any
    result = {}
    for descr in recarray.dtype.descr:
        name, dtype = descr[:2]
        value = recarray[name]
        if value.ndim == 0:
            value = value.tolist()
            if dtype[1] == 'S':
                value = bytes2str(value)
        elif value.ndim == 1:
            value = value.tolist()
            if dtype[1] == 'S':
                value = [bytes2str(v) for v in value]
        result[name] = value
    return result


def xml2dict(
    xml: str,
    /,
    *,
    sanitize: bool = True,
    prefix: tuple[str, str] | None = None,
    sep: str = ',',
) -> dict[str, Any]:
    """Return XML as dictionary.

    Parameters:
        xml: XML data to convert.
        sanitize: Remove prefix from etree Element.
        prefix: Prefixes for dictionary keys.
        sep: Sequence separator.

    Examples:
        >>> xml2dict(
        ...     '<?xml version="1.0" ?><root attr="name"><key>1</key></root>'
        ... )
        {'root': {'key': 1, 'attr': 'name'}}
        >>> xml2dict('<level1><level2>3.5322,-3.14</level2></level1>')
        {'level1': {'level2': (3.5322, -3.14)}}

    """
    try:
        from defusedxml import ElementTree as etree
    except ImportError:
        from xml.etree import ElementTree as etree

    at, tx = prefix if prefix else ('', '')

    def astype(value: Any, /) -> Any:
        # return string value as int, float, bool, tuple, or unchanged
        if not isinstance(value, str):
            return value
        if sep and sep in value:
            # sequence of numbers?
values = [] for val in value.split(sep): v = astype(val) if isinstance(v, str): return value values.append(v) return tuple(values) for t in (int, float, asbool): try: return t(value) except (TypeError, ValueError): pass return value def etree2dict(t: Any, /) -> dict[str, Any]: # adapted from https://stackoverflow.com/a/10077069/453463 key = t.tag if sanitize: key = key.rsplit('}', 1)[-1] d: dict[str, Any] = {key: {} if t.attrib else None} children = list(t) if children: dd = collections.defaultdict(list) for dc in map(etree2dict, children): for k, v in dc.items(): dd[k].append(astype(v)) d = { key: { k: astype(v[0]) if len(v) == 1 else astype(v) for k, v in dd.items() } } if t.attrib: d[key].update((at + k, astype(v)) for k, v in t.attrib.items()) if t.text: text = t.text.strip() if children or t.attrib: if text: d[key][tx + 'value'] = astype(text) else: d[key] = astype(text) return d return etree2dict(etree.fromstring(xml)) def hexdump( data: bytes, /, *, width: int = 75, height: int = 24, snipat: int | float | None = 0.75, modulo: int = 2, ellipsis: str | None = None, ) -> str: """Return hexdump representation of bytes. Parameters: data: Bytes to represent as hexdump. width: Maximum width of hexdump. height: Maximum number of lines of hexdump. snipat: Approximate position at which to split long hexdump. modulo: Number of bytes represented in line of hexdump are modulus of this value. ellipsis: Characters to insert for snipped content of long hexdump. The default is '...'. Examples: >>> hexdump(binascii.unhexlify('49492a00080000000e00fe0004000100')) '49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............' """ size = len(data) if size < 1 or width < 2 or height < 1: return '' if height == 1: addr = b'' bytesperline = min( modulo * (((width - len(addr)) // 4) // modulo), size ) if bytesperline < 1: return '' nlines = 1 else: addr = b'%%0%ix: ' % len(b'%x' % size) bytesperline = min( modulo * (((width - len(addr % 1)) // 4) // modulo), size ) if bytesperline < 1: return '' width = 3 * bytesperline + len(addr % 1) nlines = (size - 1) // bytesperline + 1 if snipat is None or snipat == 1: snipat = height elif 0 < abs(snipat) < 1: snipat = int(math.floor(height * snipat)) if snipat < 0: snipat += height assert isinstance(snipat, int) blocks: list[tuple[int, bytes | None]] if height == 1 or nlines == 1: blocks = [(0, data[:bytesperline])] addr = b'' height = 1 width = 3 * bytesperline elif not height or nlines <= height: blocks = [(0, data)] elif snipat <= 0: start = bytesperline * (nlines - height) blocks = [(start, data[start:])] # (start, None) elif snipat >= height or height < 3: end = bytesperline * height blocks = [(0, data[:end])] # (end, None) else: end1 = bytesperline * snipat end2 = bytesperline * (height - snipat - 2) if size % bytesperline: end2 += size % bytesperline else: end2 += bytesperline blocks = [ (0, data[:end1]), (size - end1 - end2, None), (size - end2, data[size - end2 :]), ] if ellipsis is None: if addr and bytesperline > 3: elps = b' ' * (len(addr % 1) + bytesperline // 2 * 3 - 2) elps += b'...' else: elps = b'...' 
else: elps = ellipsis.encode('cp1252') result = [] for start, bstr in blocks: if bstr is None: result.append(elps) # 'skip %i bytes' % start) continue hexstr = binascii.hexlify(bstr) strstr = re.sub(br'[^\x20-\x7f]', b'.', bstr) for i in range(0, len(bstr), bytesperline): h = hexstr[2 * i : 2 * i + bytesperline * 2] r = (addr % (i + start)) if height > 1 else addr r += b' '.join(h[i : i + 2] for i in range(0, 2 * bytesperline, 2)) r += b' ' * (width - len(r)) r += strstr[i : i + bytesperline] result.append(r) return b'\n'.join(result).decode('ascii') def isprintable(string: str | bytes, /) -> bool: r"""Return if all characters in string are printable. >>> isprintable('abc') True >>> isprintable(b'\01') False """ string = string.strip() if not string: return True try: return string.isprintable() # type: ignore[union-attr] except Exception: pass try: return string.decode().isprintable() # type: ignore[union-attr] except Exception: pass return False def clean_whitespace(string: str, /, compact: bool = False) -> str: r"""Return string with compressed whitespace. >>> clean_whitespace(' a \n\n b ') 'a\n b' """ string = ( string.replace('\r\n', '\n') .replace('\r', '\n') .replace('\n\n', '\n') .replace('\t', ' ') .replace(' ', ' ') .replace(' ', ' ') .replace(' \n', '\n') ) if compact: string = ( string.replace('\n', ' ') .replace('[ ', '[') .replace(' ', ' ') .replace(' ', ' ') .replace(' ', ' ') ) return string.strip() def indent(*args: Any) -> str: """Return joined string representations of objects with indented lines. >>> print(indent('Title:', 'Text')) Title: Text """ text = '\n'.join(str(arg) for arg in args) return '\n'.join( (' ' + line if line else line) for line in text.splitlines() if line )[2:] def pformat_xml(xml: str | bytes, /) -> str: """Return pretty formatted XML.""" try: from lxml import etree if not isinstance(xml, bytes): xml = xml.encode() tree = etree.parse(io.BytesIO(xml)) xml = etree.tostring( tree, pretty_print=True, xml_declaration=True, encoding=tree.docinfo.encoding, ) assert isinstance(xml, bytes) xml = bytes2str(xml) except Exception: if isinstance(xml, bytes): xml = bytes2str(xml) xml = xml.replace('><', '>\n<') return xml.replace(' ', ' ').replace('\t', ' ') def pformat( arg: Any, /, *, height: int | None = 24, width: int | None = 79, linewidth: int | None = 288, compact: bool = True, ) -> str: """Return pretty formatted representation of object as string. Whitespace might be altered. Long lines are cut off. 
""" if height is None or height < 1: height = 1024 if width is None or width < 1: width = 256 if linewidth is None or linewidth < 1: linewidth = width npopt = numpy.get_printoptions() numpy.set_printoptions(threshold=100, linewidth=width) if isinstance(arg, bytes): if arg[:5].lower() == b'': arg = bytes2str(arg) if isinstance(arg, bytes): if isprintable(arg): arg = bytes2str(arg) arg = clean_whitespace(arg) else: numpy.set_printoptions(**npopt) return hexdump(arg, width=width, height=height, modulo=1) arg = arg.rstrip() elif isinstance(arg, str): if arg[:5].lower() == '': arg = arg[: 4 * width] if height == 1 else pformat_xml(arg) # too slow # else: # import textwrap # return '\n'.join( # textwrap.wrap(arg, width=width, max_lines=height, tabsize=2) # ) arg = arg.rstrip() elif isinstance(arg, numpy.record): arg = arg.pprint() else: import pprint arg = pprint.pformat(arg, width=width, compact=compact) numpy.set_printoptions(**npopt) if height == 1: arg = arg[: width * width] arg = clean_whitespace(arg, compact=True) return arg[:linewidth] argl = list(arg.splitlines()) if len(argl) > height: arg = '\n'.join( line[:linewidth] for line in argl[: height // 2] + ['...'] + argl[-height // 2 :] ) else: arg = '\n'.join(line[:linewidth] for line in argl[:height]) return arg def snipstr( string: str, /, width: int = 79, *, snipat: int | float | None = None, ellipsis: str | None = None, ) -> str: """Return string cut to specified length. Parameters: string: String to snip. width: Maximum length of returned string. snipat: Approximate position at which to split long strings. The default is 0.5. ellipsis: Characters to insert between splits of long strings. The default is '...'. Examples: >>> snipstr('abcdefghijklmnop', 8) 'abc...op' """ if snipat is None: snipat = 0.5 if ellipsis is None: if isinstance(string, bytes): # type: ignore[unreachable] ellipsis = b'...' else: ellipsis = '\u2026' esize = len(ellipsis) splitlines = string.splitlines() # TODO: finish and test multiline snip result = [] for line in splitlines: if line is None: result.append(ellipsis) continue linelen = len(line) if linelen <= width: result.append(string) continue if snipat is None or snipat == 1: split = linelen elif 0 < abs(snipat) < 1: split = int(math.floor(linelen * snipat)) else: split = int(snipat) if split < 0: split += linelen split = max(split, 0) if esize == 0 or width < esize + 1: if split <= 0: result.append(string[-width:]) else: result.append(string[:width]) elif split <= 0: result.append(ellipsis + string[esize - width :]) elif split >= linelen or width < esize + 4: result.append(string[: width - esize] + ellipsis) else: splitlen = linelen - width + esize end1 = split - splitlen // 2 end2 = end1 + splitlen result.append(string[:end1] + ellipsis + string[end2:]) if isinstance(string, bytes): # type: ignore[unreachable] return b'\n'.join(result) return '\n'.join(result) def enumstr(enum: Any, /) -> str: """Return short string representation of Enum member. >>> enumstr(PHOTOMETRIC.RGB) 'RGB' """ name = enum.name if name is None: name = str(enum) return name def enumarg(enum: type[enum.IntEnum], arg: Any, /) -> enum.IntEnum: """Return enum member from its name or value. Parameters: enum: Type of IntEnum. arg: Name or value of enum member. Returns: Enum member matching name or value. Raises: ValueError: No enum member matches name or value. 

    Examples:
        >>> enumarg(PHOTOMETRIC, 2)
        <PHOTOMETRIC.RGB: 2>
        >>> enumarg(PHOTOMETRIC, 'RGB')
        <PHOTOMETRIC.RGB: 2>

    """
    try:
        return enum(arg)
    except Exception:
        try:
            return enum[arg.upper()]
        except Exception as exc:
            raise ValueError(f'invalid argument {arg!r}') from exc


def parse_kwargs(
    kwargs: dict[str, Any], /, *keys: str, **keyvalues: Any
) -> dict[str, Any]:
    """Return dict with keys from keys|keyvals and values from kwargs|keyvals.

    Existing keys are deleted from `kwargs`.

    >>> kwargs = {'one': 1, 'two': 2, 'four': 4}
    >>> kwargs2 = parse_kwargs(kwargs, 'two', 'three', four=None, five=5)
    >>> kwargs == {'one': 1}
    True
    >>> kwargs2 == {'two': 2, 'four': 4, 'five': 5}
    True

    """
    result = {}
    for key in keys:
        if key in kwargs:
            result[key] = kwargs[key]
            del kwargs[key]
    for key, value in keyvalues.items():
        if key in kwargs:
            result[key] = kwargs[key]
            del kwargs[key]
        else:
            result[key] = value
    return result


def update_kwargs(kwargs: dict[str, Any], /, **keyvalues: Any) -> None:
    """Update dict with keys and values if keys do not already exist.

    >>> kwargs = {'one': 1}
    >>> update_kwargs(kwargs, one=None, two=2)
    >>> kwargs == {'one': 1, 'two': 2}
    True

    """
    for key, value in keyvalues.items():
        if key not in kwargs:
            kwargs[key] = value


def kwargs_notnone(**kwargs: Any) -> dict[str, Any]:
    """Return dict of kwargs which values are not None.

    >>> kwargs_notnone(one=1, none=None)
    {'one': 1}

    """
    return dict(item for item in kwargs.items() if item[1] is not None)


def logger() -> logging.Logger:
    """Return logging.getLogger('tifffile')."""
    return logging.getLogger(
        __name__.replace('tifffile.tifffile', 'tifffile')
    )


def validate_jhove(
    filename: str,
    /,
    jhove: str | None = None,
    ignore: Collection[str] | None = None,
) -> None:
    """Validate TIFF file with ``jhove -m TIFF-hul``.

    JHOVE does not support the BigTIFF format, more than 50 IFDs, and
    many TIFF extensions.

    Parameters:
        filename: Name of TIFF file to validate.
        jhove: Path of jhove app. The default is 'jhove'.
        ignore: Jhove error messages to ignore.

    Raises:
        ValueError:
            Jhove printed an error message that does not contain one of
            the strings in `ignore`.

    References:
        - `JHOVE TIFF-hul Module
          <https://jhove.openpreservation.org/modules/tiff/>`_

    """
    import subprocess

    if ignore is None:
        ignore = {'More than 50 IFDs', 'Predictor value out of range'}
    if jhove is None:
        jhove = 'jhove'
    out = subprocess.check_output([jhove, filename, '-m', 'TIFF-hul'])
    if b'ErrorMessage: ' in out:
        for line in out.splitlines():
            line = line.strip()
            if line.startswith(b'ErrorMessage: '):
                error = line[14:].decode()
                for i in ignore:
                    if i in error:
                        break
                else:
                    raise ValueError(error)
                break


def tiffcomment(
    arg: str | os.PathLike[Any] | FileHandle | IO[bytes],
    /,
    comment: str | bytes | None = None,
    pageindex: int | None = None,
    tagcode: int | str | None = None,
) -> str | None:
    """Return or replace ImageDescription value in first page of TIFF file.

    Parameters:
        arg: Specifies TIFF file to open.
        comment:
            7-bit ASCII string or bytes to replace existing tag value.
            The existing value is zeroed.
        pageindex:
            Index of page whose tag value to read or replace.
            The default is 0.
        tagcode:
            Code of tag whose value to read or replace.
            The default is 270 (ImageDescription).

    Returns:
        None, if `comment` is specified.
        Else, the current value of the specified tag in the specified page.
""" if pageindex is None: pageindex = 0 if tagcode is None: tagcode = 270 mode: Any = None if comment is None else 'r+' with TiffFile(arg, mode=mode) as tif: page = tif.pages[pageindex] if not isinstance(page, TiffPage): raise IndexError(f'TiffPage {pageindex} not found') tag = page.tags.get(tagcode, None) if tag is None: raise ValueError(f'no {TIFF.TAGS[tagcode]} tag found') if comment is None: return tag.value tag.overwrite(comment) return None def tiff2fsspec( filename: str | os.PathLike[Any], /, url: str, *, out: str | None = None, key: int | None = None, series: int | None = None, level: int | None = None, chunkmode: CHUNKMODE | int | str | None = None, fillvalue: int | float | None = None, zattrs: dict[str, Any] | None = None, squeeze: bool | None = None, groupname: str | None = None, version: int | None = None, ) -> None: """Write fsspec ReferenceFileSystem in JSON format for data in TIFF file. By default, the first series, including all levels, is exported. Parameters: filename: Name of TIFF file to reference. url: Remote location of TIFF file without file name(s). out: Name of output JSON file. The default is the `filename` with a '.json' extension. key, series, level, chunkmode, fillvalue, zattrs, squeeze: Passed to :py:meth:`TiffFile.aszarr`. groupname, version: Passed to :py:meth:`ZarrTiffStore.write_fsspec`. """ if out is None: out = os.fspath(filename) + '.json' with TiffFile(filename) as tif: store: ZarrTiffStore with tif.aszarr( key=key, series=series, level=level, chunkmode=chunkmode, fillvalue=fillvalue, zattrs=zattrs, squeeze=squeeze, ) as store: store.write_fsspec(out, url, groupname=groupname, version=version) def lsm2bin( lsmfile: str, /, binfile: str | None = None, *, tile: tuple[int, int] | None = None, verbose: bool = True, ) -> None: """Convert [MP]TZCYX LSM file to series of BIN files. One BIN file containing 'ZCYX' data is created for each position, time, and tile. The position, time, and tile indices are encoded at the end of the filenames. Parameters: lsmfile: Name of LSM file to convert. binfile: Common name of output BIN files. The default is the name of the LSM file without extension. tile: Y and X dimension sizes of BIN files. The default is (256, 256). verbose: Print status of conversion. """ prints: Any = print if verbose else nullfunc if tile is None: tile = (256, 256) if binfile is None: binfile = lsmfile elif binfile.lower() == 'none': binfile = None if binfile: binfile += '_(z%ic%iy%ix%i)_m%%ip%%it%%03iy%%ix%%i.bin' prints('\nOpening LSM file... 
', end='', flush=True) timer = Timer() with TiffFile(lsmfile) as lsm: if not lsm.is_lsm: prints('\n', lsm, flush=True) raise ValueError('not a LSM file') series = lsm.series[0] # first series contains the image shape = series.get_shape(False) axes = series.get_axes(False) dtype = series.dtype size = product(shape) * dtype.itemsize prints(timer) # verbose(lsm, flush=True) prints( indent( 'Image', f'axes: {axes}', f'shape: {shape}', f'dtype: {dtype}', f'size: {size}', ), flush=True, ) if axes == 'CYX': shape = (1, 1) + shape elif axes == 'ZCYX': shape = (1,) + shape elif axes == 'MPCYX': shape = shape[:2] + (1, 1) + shape[2:] elif axes == 'MPZCYX': shape = shape[:2] + (1,) + shape[2:] elif not axes.endswith('TZCYX'): raise ValueError('not a *TZCYX LSM file') prints('Copying image from LSM to BIN files', end='', flush=True) timer.start() tiles = shape[-2] // tile[-2], shape[-1] // tile[-1] if binfile: binfile = binfile % (shape[-4], shape[-3], tile[0], tile[1]) shape = (1,) * (7 - len(shape)) + shape # cache for ZCYX stacks and output files data = numpy.empty(shape[3:], dtype=dtype) out = numpy.empty( (shape[-4], shape[-3], tile[0], tile[1]), dtype=dtype ) # iterate over Tiff pages containing data pages = iter(series.pages) for m in range(shape[0]): # mosaic axis for p in range(shape[1]): # position axis for t in range(shape[2]): # time axis for z in range(shape[3]): # z slices page = next(pages) assert page is not None data[z] = page.asarray() for y in range(tiles[0]): # tile y for x in range(tiles[1]): # tile x out[:] = data[ ..., y * tile[0] : (y + 1) * tile[0], x * tile[1] : (x + 1) * tile[1], ] if binfile: out.tofile(binfile % (m, p, t, y, x)) prints('.', end='', flush=True) prints(timer, flush=True) def imshow( data: NDArray[Any], /, *, photometric: PHOTOMETRIC | int | str | None = None, planarconfig: PLANARCONFIG | int | str | None = None, bitspersample: int | None = None, nodata: int | float = 0, interpolation: str | None = None, cmap: Any | None = None, vmin: int | float | None = None, vmax: int | float | None = None, figure: Any = None, subplot: Any = None, title: str | None = None, window_title: str | None = None, dpi: int = 96, maxdim: int | None = None, background: tuple[float, float, float] | str | None = None, show: bool = False, **kwargs: Any, ) -> tuple[Any, Any, Any]: """Plot n-dimensional images with `matplotlib.pyplot`. Parameters: data: Image array to display. photometric: Color space of image. planarconfig: How components of each pixel are stored. bitspersample: Number of bits per channel in integer RGB images. interpolation: Image interpolation method used in `matplotlib.imshow`. The default is 'nearest' for image dimensions > 512, else 'bilinear'. cmap: Colormap mapping non-RGBA scalar data to colors. See `matplotlib.colors.Colormap`. vmin: Minimum of data range covered by colormap. By default, the complete range of the data is covered. vmax: Maximum of data range covered by colormap. By default, the complete range of the data is covered. figure: Matplotlib figure to use for plotting. See `matplotlib.figure.Figure`. subplot: A `matplotlib.pyplot.subplot` axis. title: Subplot title. window_title: Window title. dpi: Resolution of figure. maxdim: Maximum image width and length. background: Background color. show: Display figure. **kwargs: Additional arguments passed to `matplotlib.pyplot.imshow`. Returns: Matplotlib figure, subplot, and plot axis. 
""" # TODO: rewrite detection of isrgb, iscontig # TODO: use planarconfig if photometric is None: photometric = 'RGB' if maxdim is None: maxdim = 2**16 isrgb = photometric in {'RGB', 'YCBCR'} # 'PALETTE', 'YCBCR' if data.dtype == 'float16': data = data.astype(numpy.float32) if data.dtype.kind == 'b': isrgb = False if isrgb and not ( data.shape[-1] in {3, 4} or (data.ndim > 2 and data.shape[-3] in {3, 4}) ): isrgb = False photometric = 'MINISBLACK' data = data.squeeze() if photometric in { None, 'MINISWHITE', 'MINISBLACK', 'CFA', 'MASK', 'PALETTE', 'LOGL', 'LOGLUV', 'DEPTH_MAP', 'SEMANTIC_MASK', }: data = reshape_nd(data, 2) else: data = reshape_nd(data, 3) dims = data.ndim if dims < 2: raise ValueError('not an image') if dims == 2: dims = 0 isrgb = False else: if isrgb and data.shape[-3] in {3, 4} and data.shape[-1] not in {3, 4}: data = numpy.swapaxes(data, -3, -2) data = numpy.swapaxes(data, -2, -1) elif not isrgb and ( data.shape[-1] < data.shape[-2] // 8 and data.shape[-1] < data.shape[-3] // 8 ): data = numpy.swapaxes(data, -3, -1) data = numpy.swapaxes(data, -2, -1) isrgb = isrgb and data.shape[-1] in {3, 4} dims -= 3 if isrgb else 2 if interpolation is None: threshold = 512 elif isinstance(interpolation, int): threshold = interpolation # type: ignore[unreachable] else: threshold = 0 if isrgb: data = data[..., :maxdim, :maxdim, :maxdim] if threshold: if data.shape[-2] > threshold or data.shape[-3] > threshold: interpolation = 'bilinear' else: interpolation = 'nearest' else: data = data[..., :maxdim, :maxdim] if threshold: if data.shape[-1] > threshold or data.shape[-2] > threshold: interpolation = 'bilinear' else: interpolation = 'nearest' if photometric == 'PALETTE' and isrgb: try: datamax = numpy.max(data) except ValueError: datamax = 1 if datamax > 255: data = data >> 8 # possible precision loss data = data.astype('B', copy=False) elif data.dtype.kind in 'ui': if not (isrgb and data.dtype.itemsize <= 1) or bitspersample is None: try: bitspersample = int(math.ceil(math.log(data.max(), 2))) except Exception: bitspersample = data.dtype.itemsize * 8 elif not isinstance(bitspersample, (int, numpy.integer)): # bitspersample can be tuple, such as (5, 6, 5) bitspersample = data.dtype.itemsize * 8 assert bitspersample is not None datamax = 2**bitspersample if isrgb: if bitspersample < 8: data = data << (8 - bitspersample) elif bitspersample > 8: data = data >> (bitspersample - 8) # precision loss data = data.astype('B', copy=False) elif data.dtype.kind == 'f': if nodata: data = data.copy() data[data == nodata] = numpy.nan try: datamax = numpy.nanmax(data) except ValueError: datamax = 1 if isrgb and datamax > 1.0: if data.dtype.char == 'd': data = data.astype('f') data /= datamax else: data = data / datamax elif data.dtype.kind == 'b': datamax = 1 elif data.dtype.kind == 'c': data = numpy.absolute(data) try: datamax = numpy.nanmax(data) except ValueError: datamax = 1 if isrgb: vmin = 0 else: if vmax is None: vmax = datamax if vmin is None: if data.dtype.kind == 'i': imin = numpy.iinfo(data.dtype).min try: vmin = numpy.min(data) except ValueError: vmin = -1 if vmin == imin: vmin = numpy.min(data[data > imin]) elif data.dtype.kind == 'f': fmin = float(numpy.finfo(data.dtype).min) try: vmin = numpy.nanmin(data) except ValueError: vmin = 0.0 if vmin == fmin: vmin = numpy.nanmin(data[data > fmin]) else: vmin = 0 from matplotlib import pyplot from matplotlib.widgets import Slider if figure is None: pyplot.rc('font', family='sans-serif', weight='normal', size=8) figure = pyplot.figure( dpi=dpi, 
figsize=(10.3, 6.3), frameon=True, facecolor='1.0', edgecolor='w', ) if window_title is not None: try: figure.canvas.manager.window.title(window_title) except Exception: pass size = len(title.splitlines()) if title else 1 pyplot.subplots_adjust( bottom=0.03 * (dims + 2), top=0.98 - size * 0.03, left=0.1, right=0.95, hspace=0.05, wspace=0.0, ) if subplot is None: subplot = 111 subplot = pyplot.subplot(subplot) if background is None: background = (0.382, 0.382, 0.382) subplot.set_facecolor(background) if title: if isinstance(title, bytes): title = title.decode('Windows-1252') pyplot.title(title, size=11) if cmap is None: if data.dtype.char == '?': cmap = 'gray' elif data.dtype.kind in 'buf' or vmin == 0: cmap = 'viridis' else: cmap = 'coolwarm' if photometric == 'MINISWHITE': cmap += '_r' image = pyplot.imshow( numpy.atleast_2d(data[(0,) * dims].squeeze()), vmin=vmin, vmax=vmax, cmap=cmap, interpolation=interpolation, **kwargs, ) if not isrgb: pyplot.colorbar() # panchor=(0.55, 0.5), fraction=0.05 def format_coord(x: float, y: float, /) -> str: # callback function to format coordinate display in toolbar x = int(x + 0.5) y = int(y + 0.5) try: if dims: return f'{curaxdat[1][y, x]} @ {current} [{y:4}, {x:4}]' return f'{data[y, x]} @ [{y:4}, {x:4}]' except IndexError: return '' def none(event: Any) -> str: return '' subplot.format_coord = format_coord image.get_cursor_data = none # type: ignore[assignment, method-assign] image.format_cursor_data = none # type: ignore[assignment, method-assign] if dims: current = list((0,) * dims) curaxdat = [0, data[tuple(current)].squeeze()] sliders = [ Slider( ax=pyplot.axes((0.125, 0.03 * (axis + 1), 0.725, 0.025)), label=f'Dimension {axis}', valmin=0, valmax=data.shape[axis] - 1, valinit=0, valfmt=f'%.0f [{data.shape[axis]}]', ) for axis in range(dims) ] for slider in sliders: slider.drawon = False def set_image(current, sliders=sliders, data=data): # change image and redraw canvas curaxdat[1] = data[tuple(current)].squeeze() image.set_data(curaxdat[1]) for ctrl, index in zip(sliders, current): ctrl.eventson = False ctrl.set_val(index) ctrl.eventson = True figure.canvas.draw() def on_changed(index, axis, data=data, current=current): # callback function for slider change event index = int(round(index)) curaxdat[0] = axis if index == current[axis]: return if index >= data.shape[axis]: index = 0 elif index < 0: index = data.shape[axis] - 1 current[axis] = index set_image(current) def on_keypressed(event, data=data, current=current): # callback function for key press event key = event.key axis = curaxdat[0] if str(key) in '0123456789': on_changed(key, axis) elif key == 'right': on_changed(current[axis] + 1, axis) elif key == 'left': on_changed(current[axis] - 1, axis) elif key == 'up': curaxdat[0] = 0 if axis == len(data.shape) - 1 else axis + 1 elif key == 'down': curaxdat[0] = len(data.shape) - 1 if axis == 0 else axis - 1 elif key == 'end': on_changed(data.shape[axis] - 1, axis) elif key == 'home': on_changed(0, axis) figure.canvas.mpl_connect('key_press_event', on_keypressed) for axis, ctrl in enumerate(sliders): ctrl.on_changed( lambda k, a=axis: on_changed(k, a) # type: ignore[misc] ) if show: pyplot.show() return figure, subplot, image def askopenfilename(**kwargs: Any) -> str: """Return file name(s) from Tkinter's file open dialog.""" from tkinter import Tk, filedialog root = Tk() root.withdraw() root.update() filenames = filedialog.askopenfilename(**kwargs) root.destroy() return filenames def main() -> int: """Tifffile command line usage main 
function.""" import optparse # TODO: use argparse logger().setLevel(logging.INFO) parser = optparse.OptionParser( usage='usage: %prog [options] path', description='Display image and metadata in TIFF file.', version=f'%prog {__version__}', prog='tifffile', ) opt = parser.add_option opt( '-p', '--page', dest='page', type='int', default=-1, help='display single page', ) opt( '-s', '--series', dest='series', type='int', default=-1, help='display select series', ) opt( '-l', '--level', dest='level', type='int', default=-1, help='display pyramid level of series', ) opt( '--nomultifile', dest='nomultifile', action='store_true', default=False, help='do not read OME series from multiple files', ) opt( '--maxplots', dest='maxplots', type='int', default=10, help='maximum number of plot windows', ) opt( '--interpol', dest='interpol', metavar='INTERPOL', default=None, help='image interpolation method', ) opt('--dpi', dest='dpi', type='int', default=96, help='plot resolution') opt( '--vmin', dest='vmin', type='int', default=None, help='minimum value for colormapping', ) opt( '--vmax', dest='vmax', type='int', default=None, help='maximum value for colormapping', ) opt( '--cmap', dest='cmap', type='str', default=None, help='colormap name used to map data to colors', ) opt( '--maxworkers', dest='maxworkers', type='int', default=0, help='maximum number of threads', ) opt( '--debug', dest='debug', action='store_true', default=False, help='raise exception on failures', ) opt( '--doctest', dest='doctest', action='store_true', default=False, help='run docstring examples', ) opt('-v', '--detail', dest='detail', type='int', default=2) opt('-q', '--quiet', dest='quiet', action='store_true') settings, path_list = parser.parse_args() path = ' '.join(path_list) if settings.doctest: import doctest try: import tifffile.tifffile as m except ImportError: m = None # type: ignore[assignment] doctest.testmod(m, optionflags=doctest.ELLIPSIS) return 0 if not path: path = askopenfilename( title='Select a TIFF file', filetypes=TIFF.FILEOPEN_FILTER ) if not path: parser.error('No file specified') if any(i in path for i in '?*'): path_list = glob.glob(path) if not path_list: print('No files match the pattern') return 0 # TODO: handle image sequences path = path_list[0] if not settings.quiet: print('\nReading TIFF header:', end=' ', flush=True) timer = Timer() try: tif = TiffFile(path, _multifile=not settings.nomultifile) except Exception as exc: if settings.debug: raise print(f'\n\n{exc.__class__.__name__}: {exc}') return 0 if not settings.quiet: print(timer) if tif.is_ome: settings.norgb = True images: list[tuple[Any, Any, Any]] = [] if settings.maxplots > 0: if not settings.quiet: print('Reading image data:', end=' ', flush=True) def notnone(x: Any, /) -> Any: return next(i for i in x if i is not None) timer.start() try: if settings.page >= 0: images = [ ( tif.asarray( key=settings.page, maxworkers=settings.maxworkers ), tif.pages[settings.page], None, ) ] elif settings.series >= 0: series = tif.series[settings.series] if settings.level >= 0: level = settings.level elif series.is_pyramidal and product(series.shape) > 2**32: level = -1 for r in series.levels: level += 1 if product(r.shape) < 2**32: break else: level = 0 images = [ ( tif.asarray( series=settings.series, level=level, maxworkers=settings.maxworkers, ), notnone(tif.series[settings.series]._pages), tif.series[settings.series], ) ] else: for i, s in enumerate(tif.series[: settings.maxplots]): if settings.level < 0: level = -1 for r in s.levels: level += 1 if 
product(r.shape) < 2**31:
                                break
                    else:
                        level = settings.level
                    try:
                        images.append(
                            (
                                tif.asarray(
                                    series=i,
                                    level=level,
                                    maxworkers=settings.maxworkers,
                                ),
                                notnone(s._pages),
                                tif.series[i],
                            )
                        )
                    except Exception as exc:
                        images.append((None, notnone(s.pages), None))
                        if settings.debug:
                            raise
                        print(f'\nSeries {i} raised {exc!r:.128}... ', end='')
        except Exception as exc:
            if settings.debug:
                raise
            print(f'{exc.__class__.__name__}: {exc}')
        if not settings.quiet:
            print(timer)

    if not settings.quiet:
        print('Generating report:', end=' ', flush=True)
        timer.start()
    try:
        width = os.get_terminal_size()[0]
    except Exception:
        width = 80
    info = tif._str(detail=int(settings.detail), width=width - 1)
    print(timer)
    print()
    print(info)
    print()

    if images and settings.maxplots > 0:
        try:
            from matplotlib import pyplot
        except ImportError as exc:
            logger().warning(f'<tifffile.main> raised {exc!r:.128}')
        else:
            for img, page, series in images:
                if img is None:
                    continue
                keyframe = page.keyframe
                vmin, vmax = settings.vmin, settings.vmax
                if keyframe.nodata:
                    try:
                        if img.dtype.kind == 'f':
                            img[img == keyframe.nodata] = numpy.nan
                            vmin = numpy.nanmin(img)
                        else:
                            vmin = numpy.min(img[img > keyframe.nodata])
                    except ValueError:
                        pass
                if tif.is_stk:
                    try:
                        vmin = tif.stk_metadata[
                            'MinScale'  # type: ignore[index]
                        ]
                        vmax = tif.stk_metadata[
                            'MaxScale'  # type: ignore[index]
                        ]
                    except KeyError:
                        pass
                    else:
                        if vmax <= vmin:
                            vmin, vmax = settings.vmin, settings.vmax
                if series:
                    title = f'{tif}\n{page}\n{series}'
                    window_title = f'{tif.filename} series {series.index}'
                else:
                    title = f'{tif}\n{page}'
                    window_title = f'{tif.filename} page {page.index}'
                photometric = 'MINISBLACK'
                if keyframe.photometric != 3:
                    photometric = PHOTOMETRIC(keyframe.photometric).name
                imshow(
                    img,
                    title=title,
                    window_title=window_title,
                    vmin=vmin,
                    vmax=vmax,
                    cmap=settings.cmap,
                    bitspersample=keyframe.bitspersample,
                    nodata=keyframe.nodata,
                    photometric=photometric,
                    interpolation=settings.interpol,
                    dpi=settings.dpi,
                    show=False,
                )
            pyplot.show()

    tif.close()
    return 0


def bytes2str(
    b: bytes, /, encoding: str | None = None, errors: str = 'strict'
) -> str:
    """Return Unicode string from encoded bytes up to first NULL character."""
    if encoding is None or '16' not in encoding:
        i = b.find(b'\x00')
        if i >= 0:
            b = b[:i]
    else:
        # utf-16
        i = b.find(b'\x00\x00')
        if i >= 0:
            b = b[: i + i % 2]
    try:
        return b.decode('utf-8' if encoding is None else encoding, errors)
    except UnicodeDecodeError:
        if encoding is not None:
            raise
        return b.decode('cp1252', errors)


def bytestr(s: str | bytes, /, encoding: str = 'cp1252') -> bytes:
    """Return bytes from Unicode string, else pass through."""
    return s.encode(encoding) if isinstance(s, str) else s


# aliases and deprecated
TiffReader = TiffFile

if __name__ == '__main__':
    sys.exit(main())

# mypy: allow-untyped-defs, allow-untyped-calls
# mypy: disable-error-code="no-any-return, unreachable, redundant-expr"

Metadata-Version: 2.4
Name: tifffile
Version: 2025.3.30
Summary: Read and write TIFF files
Home-page: https://www.cgohlke.com
Author: Christoph Gohlke
Author-email: cgohlke@cgohlke.com
License: BSD-3-Clause
Project-URL: Bug Tracker, https://github.com/cgohlke/tifffile/issues
Project-URL: Source Code, https://github.com/cgohlke/tifffile
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.10
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: numpy
Provides-Extra: codecs
Requires-Dist: imagecodecs>=2024.12.30; extra == "codecs"
Provides-Extra: xml
Requires-Dist: defusedxml; extra == "xml"
Requires-Dist: lxml; extra == "xml"
Provides-Extra: zarr
Requires-Dist: zarr<3; extra == "zarr"
Requires-Dist: fsspec; extra == "zarr"
Provides-Extra: plot
Requires-Dist: matplotlib; extra == "plot"
Provides-Extra: all
Requires-Dist: imagecodecs>=2024.12.30; extra == "all"
Requires-Dist: matplotlib; extra == "all"
Requires-Dist: defusedxml; extra == "all"
Requires-Dist: lxml; extra == "all"
Requires-Dist: zarr<3; extra == "all"
Requires-Dist: fsspec; extra == "all"
Provides-Extra: test
Requires-Dist: pytest; extra == "test"
Requires-Dist: imagecodecs; extra == "test"
Requires-Dist: czifile; extra == "test"
Requires-Dist: cmapfile; extra == "test"
Requires-Dist: oiffile; extra == "test"
Requires-Dist: lfdfiles; extra == "test"
Requires-Dist: psdtags; extra == "test"
Requires-Dist: roifile; extra == "test"
Requires-Dist: lxml; extra == "test"
Requires-Dist: zarr<3; extra == "test"
Requires-Dist: dask; extra == "test"
Requires-Dist: xarray; extra == "test"
Requires-Dist: fsspec; extra == "test"
Requires-Dist: defusedxml; extra == "test"
Requires-Dist: ndtiff; extra == "test"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: license-file
Dynamic: platform
Dynamic: project-url
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

Read and write TIFF files
=========================

Tifffile is a Python library to

(1) store NumPy arrays in TIFF (Tagged Image File Format) files, and
(2) read image and metadata from TIFF-like files used in bioimaging.

Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, GeoTIFF,
Adobe DNG, ZIF (Zoomable Image File Format), MetaMorph STK, Zeiss LSM,
ImageJ hyperstack, Micro-Manager MMStack and NDTiff, SGI, NIHImage,
Olympus FluoView and SIS, ScanImage, Molecular Dynamics GEL, Aperio SVS,
Leica SCN, Roche BIF, PerkinElmer QPTIFF (QPI, PKI), Hamamatsu NDPI,
Argos AVS, and Philips DP formatted files.

Image data can be read as NumPy arrays or Zarr 2 arrays/groups from strips,
tiles, pages (IFDs), SubIFDs, higher-order series, and pyramidal levels.

Image data can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack
compatible files in multi-page, volumetric, pyramidal, memory-mappable,
tiled, predicted, or compressed form.

Many compression and predictor schemes are supported via the imagecodecs
library, including LZW, PackBits, Deflate, PIXTIFF, LZMA, LERC, Zstd,
JPEG (8 and 12-bit, lossless), JPEG 2000, JPEG XR, JPEG XL, WebP, PNG, EER,
Jetraw, 24-bit floating-point, and horizontal differencing.
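
For example, writing a compressed multi-page file can be as simple as the
following (a minimal sketch; ``temp.tif`` is a placeholder file name, and
the imagecodecs package must be installed for most compression schemes)::

    import numpy
    import tifffile

    # random data standing in for a stack of grayscale images
    data = numpy.random.randint(0, 255, (4, 256, 256), 'uint8')
    # write a multi-page TIFF with Adobe Deflate compression
    tifffile.imwrite('temp.tif', data, compression='zlib')
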
:Author: Christoph Gohlke
:License: BSD 3-Clause
:Version: 2025.3.30
:DOI: 10.5281/zenodo.6795860

Quickstart
----------

Install the tifffile package and all dependencies from the
Python Package Index::

    python -m pip install -U tifffile[all]

Tifffile is also available in other package repositories such as
Anaconda, Debian, and MSYS2.

The tifffile library is type annotated and documented via docstrings::

    python -c "import tifffile; help(tifffile)"

Tifffile can be used as a console script to inspect and preview TIFF
files::

    python -m tifffile --help

See `Examples`_ for using the programming interface.

Source code and support are available on GitHub
(https://github.com/cgohlke/tifffile). Support is also provided on the
image.sc forum.

Requirements
------------

This revision was tested with the following requirements and
dependencies (other versions may work):

- CPython 3.10.11, 3.11.9, 3.12.9, 3.13.2 64-bit
- NumPy 2.2.4
- Imagecodecs 2025.3.30 (required for encoding or decoding LZW, JPEG,
  etc. compressed segments)
- Matplotlib 3.10.1 (required for plotting)
- Lxml 5.3.1 (required only for validating and printing XML)
- Zarr 2.18.5 (required only for opening Zarr stores; Zarr 3 is not
  compatible)
- Fsspec 2025.2.0 (required only for opening ReferenceFileSystem files)
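Whether the expected revision is installed can be verified from the
command line; the printed version depends on the local environment::

    python -c "import tifffile; print(tifffile.__version__)"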
Revisions
---------

2025.3.30

- Pass 5110 tests.
- Fix for imagecodecs 2025.3.30.

2025.3.13

- Change bytes2str to decode only up to first NULL character (breaking).
- Remove stripnull function calls to reduce overhead (#285).
- Deprecate stripnull function.

2025.2.18

- Fix julian_datetime milliseconds (#283).
- Remove deprecated dtype arguments from imread and FileSequence (breaking).
- Remove deprecated imsave and TiffWriter.save function/method (breaking).
- Remove deprecated option to pass multiple values to compression (breaking).
- Remove deprecated option to pass unit to resolution (breaking).
- Remove deprecated enums from TIFF namespace (breaking).
- Remove deprecated lazyattr and squeeze_axes functions (breaking).

2025.1.10

- Improve type hints.
- Deprecate Python 3.10.

Refer to the CHANGES file for older revisions.

Notes
-----

TIFF, the Tagged Image File Format, was created by the Aldus Corporation
and Adobe Systems Incorporated.

Tifffile supports a subset of the TIFF6 specification, mainly 8, 16, 32,
and 64-bit integer, 16, 32, and 64-bit float, grayscale and multi-sample
images. Specifically, CCITT and OJPEG compression, chroma subsampling
without JPEG compression, color space transformations, samples with
differing types, or IPTC, ICC, and XMP metadata are not implemented.

Besides classic TIFF, tifffile supports several TIFF-like formats that
do not strictly adhere to the TIFF6 specification. Some formats allow
file and data sizes to exceed the 4 GB limit of the classic TIFF
(a sketch following this list shows how to detect some of these variants
at runtime):

- **BigTIFF** is identified by version number 43 and uses different file
  header, IFD, and tag structures with 64-bit offsets. The format also
  adds 64-bit data types. Tifffile can read and write BigTIFF files.
- **ImageJ hyperstacks** store all image data, which may exceed 4 GB,
  contiguously after the first IFD. Files > 4 GB contain one IFD only.
  The size and shape of the up to 6-dimensional image data can be
  determined from the ImageDescription tag of the first IFD, which is
  Latin-1 encoded. Tifffile can read and write ImageJ hyperstacks.
- **OME-TIFF** files store up to 8-dimensional image data in one or
  multiple TIFF or BigTIFF files. The UTF-8 encoded OME-XML metadata
  found in the ImageDescription tag of the first IFD defines the
  position of TIFF IFDs in the high-dimensional image data. Tifffile can
  read OME-TIFF files (except multi-file pyramidal) and write NumPy
  arrays to single-file OME-TIFF.
- **Micro-Manager NDTiff** stores multi-dimensional image data in one
  or more classic TIFF files. Metadata contained in a separate
  NDTiff.index binary file defines the position of the TIFF IFDs in the
  image array. Each TIFF file also contains metadata in a non-TIFF
  binary structure at offset 8. Downsampled image data of pyramidal
  datasets are stored in separate folders. Tifffile can read NDTiff
  files. Version 0 and 1 series, tiling, stitching, and multi-resolution
  pyramids are not supported.
- **Micro-Manager MMStack** stores 6-dimensional image data in one or
  more classic TIFF files. Metadata contained in non-TIFF binary
  structures and JSON strings define the image stack dimensions and the
  position of the image frame data in the file and the image stack. The
  TIFF structures and metadata are often corrupted or wrong. Tifffile
  can read MMStack files.
- **Carl Zeiss LSM** files store all IFDs below 4 GB and wrap around
  32-bit StripOffsets pointing to image data above 4 GB. The
  StripOffsets of each series and position require separate unwrapping.
  The StripByteCounts tag contains the number of bytes for the
  uncompressed data. Tifffile can read LSM files of any size.
- **MetaMorph Stack, STK** files contain additional image planes stored
  contiguously after the image data of the first page. The total number
  of planes is equal to the count of the UIC2tag. Tifffile can read STK
  files.
- **ZIF**, the Zoomable Image File format, is a subspecification of
  BigTIFF with SGI's ImageDepth extension and additional compression
  schemes. Only little-endian, tiled, interleaved, 8-bit per sample
  images with JPEG, PNG, JPEG XR, and JPEG 2000 compression are allowed.
  Tifffile can read and write ZIF files.
- **Hamamatsu NDPI** files use some 64-bit offsets in the file header,
  IFD, and tag structures. Single, LONG typed tag values can exceed
  32-bit. The high bytes of 64-bit tag values and offsets are stored
  after IFD structures. Tifffile can read NDPI files > 4 GB. JPEG
  compressed segments with dimensions > 65530 or missing restart markers
  cannot be decoded with common JPEG libraries. Tifffile works around
  this limitation by separately decoding the MCUs between restart
  markers, which performs poorly. BitsPerSample, SamplesPerPixel, and
  PhotometricInterpretation tags may contain wrong values, which can be
  corrected using the value of tag 65441.
- **Philips TIFF** slides store padded ImageWidth and ImageLength tag
  values for tiled pages. The values can be corrected using the
  DICOM_PIXEL_SPACING attributes of the XML formatted description of the
  first page. Tile offsets and byte counts may be 0. Tifffile can read
  Philips slides.
- **Ventana/Roche BIF** slides store tiles and metadata in a BigTIFF
  container. Tiles may overlap and require stitching based on the
  TileJointInfo elements in the XMP tag. Volumetric scans are stored
  using the ImageDepth extension. Tifffile can read BIF and decode
  individual tiles but does not perform stitching.
- **ScanImage** optionally allows corrupted non-BigTIFF files > 2 GB.
  The values of StripOffsets and StripByteCounts can be recovered using
  the constant differences of the offsets of IFD and tag values
  throughout the file. Tifffile can read such files if the image data
  are stored contiguously in each page.
- **GeoTIFF sparse** files allow strip or tile offsets and byte counts
  to be 0. Such segments are implicitly set to 0 or the NODATA value on
  reading. Tifffile can read GeoTIFF sparse files.
- **Tifffile shaped** files store the array shape and user-provided
  metadata of multi-dimensional image series in JSON format in the
  ImageDescription tag of the first page of the series. The format
  allows multiple series, SubIFDs, sparse segments with zero offset and
  byte count, and truncated series, where only the first page of a
  series is present, and the image data are stored contiguously. No
  other software besides Tifffile supports the truncated format.
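Which of these variants a particular file uses can often be determined
at runtime from the `is_*` properties of `TiffFile`. A minimal sketch,
assuming a hypothetical OME-TIFF file named `example.ome.tif`:

>>> with TiffFile('example.ome.tif') as tif:  # doctest: +SKIP
...     tif.is_bigtiff, tif.is_ome, tif.is_imagej
...
(False, True, False)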
Other libraries for reading, writing, inspecting, or manipulating
scientific TIFF files from Python are aicsimageio,
apeer-ometiff-library, bigtiff, fabio.TiffIO, GDAL, imread, large_image,
openslide-python, opentile, pylibtiff, pylsm, pymimage,
python-bioformats, pytiff, scanimagetiffreader-python, SimpleITK,
slideio, tiffslide, tifftools, tyf, xtiff, and ndtiff.

References
----------

- TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.
  https://www.adobe.io/open/standards/TIFF.html
  https://download.osgeo.org/libtiff/doc/
- TIFF File Format FAQ.
  https://www.awaresystems.be/imaging/tiff/faq.html
- The BigTIFF File Format.
  https://www.awaresystems.be/imaging/tiff/bigtiff.html
- MetaMorph Stack (STK) Image File Format.
  http://mdc.custhelp.com/app/answers/detail/a_id/18862
- Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010).
  Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011
- The OME-TIFF format.
  https://docs.openmicroscopy.org/ome-model/latest/
- UltraQuant(r) Version 6.0 for Windows Start-Up Guide.
  http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf
- Micro-Manager File Formats.
  https://micro-manager.org/wiki/Micro-Manager_File_Formats
- ScanImage BigTiff Specification.
  https://docs.scanimage.org/Appendix/ScanImage+BigTiff+Specification.html
- ZIF, the Zoomable Image File format. https://zif.photo/
- GeoTIFF File Format. https://gdal.org/drivers/raster/gtiff.html
- Cloud optimized GeoTIFF.
  https://github.com/cogeotiff/cog-spec/blob/master/spec.md
- Tags for TIFF and Related Specifications. Digital Preservation.
  https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
- CIPA DC-008-2016: Exchangeable image file format for digital still
  cameras: Exif Version 2.31.
  http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf
- The EER (Electron Event Representation) file format.
  https://github.com/fei-company/EerReaderLib
- Digital Negative (DNG) Specification. Version 1.7.1.0, September 2023.
  https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf
- Roche Digital Pathology. BIF image file format for digital pathology.
  https://diagnostics.roche.com/content/dam/diagnostics/Blueprint/en/pdf/rmd/Roche-Digital-Pathology-BIF-Whitepaper.pdf
- Astro-TIFF specification. https://astro-tiff.sourceforge.io/
- Aperio Technologies, Inc. Digital Slides and Third-Party Data
  Interchange. Aperio_Digital_Slides_and_Third-party_data_interchange.pdf
- PerkinElmer image format.
  https://downloads.openmicroscopy.org/images/Vectra-QPTIFF/perkinelmer/PKI_Image%20Format.docx
- NDTiffStorage. https://github.com/micro-manager/NDTiffStorage
- Argos AVS File Format.
  https://github.com/user-attachments/files/15580286/ARGOS.AVS.File.Format.pdf

Examples
--------

Write a NumPy array to a single-page RGB TIFF file:

>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')

Read the image from the TIFF file as NumPy array:

>>> image = imread('temp.tif')
>>> image.shape
(256, 256, 3)

Use the `photometric` and `planarconfig` arguments to write a 3x3x3
NumPy array to an interleaved RGB, a planar RGB, or a 3-page grayscale
TIFF:

>>> data = numpy.random.randint(0, 255, (3, 3, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')
>>> imwrite('temp.tif', data, photometric='rgb', planarconfig='separate')
>>> imwrite('temp.tif', data, photometric='minisblack')

Use the `extrasamples` argument to specify how extra components are
interpreted, for example, for an RGBA image with unassociated alpha
channel:

>>> data = numpy.random.randint(0, 255, (256, 256, 4), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb', extrasamples=['unassalpha'])

Write a 3-dimensional NumPy array to a multi-page, 16-bit grayscale TIFF
file:

>>> data = numpy.random.randint(0, 2**12, (64, 301, 219), 'uint16')
>>> imwrite('temp.tif', data, photometric='minisblack')

Read the whole image stack from the multi-page TIFF file as NumPy array:

>>> image_stack = imread('temp.tif')
>>> image_stack.shape
(64, 301, 219)
>>> image_stack.dtype
dtype('uint16')

Read the image from the first page in the TIFF file as NumPy array:

>>> image = imread('temp.tif', key=0)
>>> image.shape
(301, 219)

Read images from a selected range of pages:

>>> images = imread('temp.tif', key=range(4, 40, 2))
>>> images.shape
(18, 301, 219)

Iterate over all pages in the TIFF file and successively read images:

>>> with TiffFile('temp.tif') as tif:
...     for page in tif.pages:
...         image = page.asarray()
...

Get information about the image stack in the TIFF file without reading
any image data:

>>> tif = TiffFile('temp.tif')
>>> len(tif.pages)  # number of pages in the file
64
>>> page = tif.pages[0]  # get shape and dtype of image in first page
>>> page.shape
(301, 219)
>>> page.dtype
dtype('uint16')
>>> page.axes
'YX'
>>> series = tif.series[0]  # get shape and dtype of first image series
>>> series.shape
(64, 301, 219)
>>> series.dtype
dtype('uint16')
>>> series.axes
'QYX'
>>> tif.close()

Inspect the "XResolution" tag from the first page in the TIFF file:

>>> with TiffFile('temp.tif') as tif:
...     tag = tif.pages[0].tags['XResolution']
...
>>> tag.value
(1, 1)
>>> tag.name
'XResolution'
>>> tag.code
282
>>> tag.count
1
>>> tag.dtype
<DATATYPE.RATIONAL: 5>

Iterate over all tags in the TIFF file:

>>> with TiffFile('temp.tif') as tif:
...     for page in tif.pages:
...         for tag in page.tags:
...             tag_name, tag_value = tag.name, tag.value
...

Overwrite the value of an existing tag, for example, XResolution:

>>> with TiffFile('temp.tif', mode='r+') as tif:
...     _ = tif.pages[0].tags['XResolution'].overwrite((96000, 1000))
...
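The ImageDescription of an existing file can be returned or replaced
with the `tiffcomment` function, which also backs the `tiffcomment`
console script. A minimal sketch; 'new description' is only a
placeholder value:

>>> tiffcomment('temp.tif', 'new description')  # doctest: +SKIP
>>> tiffcomment('temp.tif')  # doctest: +SKIP
'new description'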
Write a 5-dimensional floating-point array using BigTIFF format,
separate color components, tiling, Zlib compression level 8, horizontal
differencing predictor, and additional metadata:

>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')
>>> imwrite(
...     'temp.tif',
...     data,
...     bigtiff=True,
...     photometric='rgb',
...     planarconfig='separate',
...     tile=(32, 32),
...     compression='zlib',
...     compressionargs={'level': 8},
...     predictor=True,
...     metadata={'axes': 'TZCYX'},
... )

Write a 10 fps time series of volumes with xyz voxel size
2.6755x2.6755x3.9474 micron^3 to an ImageJ hyperstack formatted TIFF
file:

>>> volume = numpy.random.randn(6, 57, 256, 256).astype('float32')
>>> image_labels = [f'{i}' for i in range(volume.shape[0] * volume.shape[1])]
>>> imwrite(
...     'temp.tif',
...     volume,
...     imagej=True,
...     resolution=(1.0 / 2.6755, 1.0 / 2.6755),
...     metadata={
...         'spacing': 3.947368,
...         'unit': 'um',
...         'finterval': 1 / 10,
...         'fps': 10.0,
...         'axes': 'TZYX',
...         'Labels': image_labels,
...     },
... )

Read the volume and metadata from the ImageJ hyperstack file:

>>> with TiffFile('temp.tif') as tif:
...     volume = tif.asarray()
...     axes = tif.series[0].axes
...     imagej_metadata = tif.imagej_metadata
...
>>> volume.shape
(6, 57, 256, 256)
>>> axes
'TZYX'
>>> imagej_metadata['slices']
57
>>> imagej_metadata['frames']
6

Memory-map the contiguous image data in the ImageJ hyperstack file:

>>> memmap_volume = memmap('temp.tif')
>>> memmap_volume.shape
(6, 57, 256, 256)
>>> del memmap_volume

Create a TIFF file containing an empty image and write to the
memory-mapped NumPy array (note: this does not work with compression or
tiling):

>>> memmap_image = memmap(
...     'temp.tif', shape=(256, 256, 3), dtype='float32', photometric='rgb'
... )
>>> type(memmap_image)
<class 'numpy.memmap'>
>>> memmap_image[255, 255, 1] = 1.0
>>> memmap_image.flush()
>>> del memmap_image

Write two NumPy arrays to a multi-series TIFF file (note: other TIFF
readers will not recognize the two series; use the OME-TIFF format for
better interoperability):

>>> series0 = numpy.random.randint(0, 255, (32, 32, 3), 'uint8')
>>> series1 = numpy.random.randint(0, 255, (4, 256, 256), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
...     tif.write(series0, photometric='rgb')
...     tif.write(series1, photometric='minisblack')
...

Read the second image series from the TIFF file:

>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(4, 256, 256)

Successively write the frames of one contiguous series to a TIFF file:

>>> data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif') as tif:
...     for frame in data:
...         tif.write(frame, contiguous=True)
...

Append an image series to the existing TIFF file (note: this does not
work with ImageJ hyperstack or OME-TIFF files):

>>> data = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb', append=True)

Create a TIFF file from a generator of tiles:

>>> data = numpy.random.randint(0, 2**12, (31, 33, 3), 'uint16')
>>> def tiles(data, tileshape):
...     for y in range(0, data.shape[0], tileshape[0]):
...         for x in range(0, data.shape[1], tileshape[1]):
...             yield data[y : y + tileshape[0], x : x + tileshape[1]]
...
>>> imwrite(
...     'temp.tif',
...     tiles(data, (16, 16)),
...     tile=(16, 16),
...     shape=data.shape,
...     dtype=data.dtype,
...     photometric='rgb',
... )
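The tile layout written by the generator can be verified from the
`tilewidth` and `tilelength` attributes of `TiffPage` without decoding
any image data (a minimal sketch, assuming the tiled file written
above):

>>> with TiffFile('temp.tif') as tif:
...     (tif.pages[0].tilewidth, tif.pages[0].tilelength)
...
(16, 16)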
Write a multi-dimensional, multi-resolution (pyramidal), multi-series
OME-TIFF file with optional metadata. Sub-resolution images are written
to SubIFDs. Limit parallel encoding to 2 threads. Write a thumbnail
image as a separate image series:

>>> data = numpy.random.randint(0, 255, (8, 2, 512, 512, 3), 'uint8')
>>> subresolutions = 2
>>> pixelsize = 0.29  # micrometer
>>> with TiffWriter('temp.ome.tif', bigtiff=True) as tif:
...     metadata = {
...         'axes': 'TCYXS',
...         'SignificantBits': 8,
...         'TimeIncrement': 0.1,
...         'TimeIncrementUnit': 's',
...         'PhysicalSizeX': pixelsize,
...         'PhysicalSizeXUnit': 'µm',
...         'PhysicalSizeY': pixelsize,
...         'PhysicalSizeYUnit': 'µm',
...         'Channel': {'Name': ['Channel 1', 'Channel 2']},
...         'Plane': {'PositionX': [0.0] * 16, 'PositionXUnit': ['µm'] * 16},
...         'Description': 'A multi-dimensional, multi-resolution image',
...         'MapAnnotation': {  # for OMERO
...             'Namespace': 'openmicroscopy.org/PyramidResolution',
...             '1': '256 256',
...             '2': '128 128',
...         },
...     }
...     options = dict(
...         photometric='rgb',
...         tile=(128, 128),
...         compression='jpeg',
...         resolutionunit='CENTIMETER',
...         maxworkers=2,
...     )
...     tif.write(
...         data,
...         subifds=subresolutions,
...         resolution=(1e4 / pixelsize, 1e4 / pixelsize),
...         metadata=metadata,
...         **options,
...     )
...     # write pyramid levels to the two subifds
...     # in production use resampling to generate sub-resolution images
...     for level in range(subresolutions):
...         mag = 2 ** (level + 1)
...         tif.write(
...             data[..., ::mag, ::mag, :],
...             subfiletype=1,
...             resolution=(1e4 / mag / pixelsize, 1e4 / mag / pixelsize),
...             **options,
...         )
...     # add a thumbnail image as a separate series
...     # it is recognized by QuPath as an associated image
...     thumbnail = (data[0, 0, ::8, ::8] >> 2).astype('uint8')
...     tif.write(thumbnail, metadata={'Name': 'thumbnail'})
...

Access the image levels in the pyramidal OME-TIFF file:

>>> baseimage = imread('temp.ome.tif')
>>> second_level = imread('temp.ome.tif', series=0, level=1)
>>> with TiffFile('temp.ome.tif') as tif:
...     baseimage = tif.series[0].asarray()
...     second_level = tif.series[0].levels[1].asarray()
...     number_levels = len(tif.series[0].levels)  # includes base level
...

Iterate over and decode single JPEG compressed tiles in the TIFF file:

>>> with TiffFile('temp.ome.tif') as tif:
...     fh = tif.filehandle
...     for page in tif.pages:
...         for index, (offset, bytecount) in enumerate(
...             zip(page.dataoffsets, page.databytecounts)
...         ):
...             _ = fh.seek(offset)
...             data = fh.read(bytecount)
...             tile, indices, shape = page.decode(
...                 data, index, jpegtables=page.jpegtables
...             )
...

Use Zarr 2 to read parts of the tiled, pyramidal images in the TIFF
file:

>>> import zarr
>>> store = imread('temp.ome.tif', aszarr=True)
>>> z = zarr.open(store, mode='r')
>>> z
>>> z[0]  # base layer
>>> z[0][2, 0, 128:384, 256:].shape  # read a tile from the base layer
(256, 256, 3)
>>> store.close()

Load the base layer from the Zarr 2 store as a dask array:

>>> import dask.array
>>> store = imread('temp.ome.tif', aszarr=True)
>>> dask.array.from_zarr(store, 0)
dask.array<...shape=(8, 2, 512, 512, 3)...chunksize=(1, 1, 128, 128, 3)...
>>> store.close()

Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format:

>>> store = imread('temp.ome.tif', aszarr=True)
>>> store.write_fsspec('temp.ome.tif.json', url='file://')
>>> store.close()

Open the fsspec ReferenceFileSystem as a Zarr group:

>>> import fsspec
>>> import imagecodecs.numcodecs
>>> imagecodecs.numcodecs.register_codecs()
>>> mapper = fsspec.get_mapper(
...     'reference://', fo='temp.ome.tif.json', target_protocol='file'
... )
>>> z = zarr.open(mapper, mode='r')
>>> z
Create an OME-TIFF file containing an empty, tiled image series and
write to it via the Zarr 2 interface (note: this does not work with
compression):

>>> imwrite(
...     'temp2.ome.tif',
...     shape=(8, 800, 600),
...     dtype='uint16',
...     photometric='minisblack',
...     tile=(128, 128),
...     metadata={'axes': 'CYX'},
... )
>>> store = imread('temp2.ome.tif', mode='r+', aszarr=True)
>>> z = zarr.open(store, mode='r+')
>>> z
>>> z[3, 100:200, 200:300:2] = 1024
>>> store.close()

Read images from a sequence of TIFF files as NumPy array using two I/O
worker threads:

>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = imread(
...     ['temp_C001T001.tif', 'temp_C001T002.tif'], ioworkers=2, maxworkers=1
... )
>>> image_sequence.shape
(2, 64, 64)
>>> image_sequence.dtype
dtype('float64')

Read an image stack from a series of TIFF files with a file name pattern
as NumPy or Zarr 2 arrays:

>>> image_sequence = TiffSequence('temp_C0*.tif', pattern=r'_(C)(\d+)(T)(\d+)')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
>>> store = image_sequence.aszarr()
>>> zarr.open(store, mode='r')
>>> image_sequence.close()

Write the Zarr 2 store to a fsspec ReferenceFileSystem in JSON format:

>>> store = image_sequence.aszarr()
>>> store.write_fsspec('temp.json', url='file://')

Open the fsspec ReferenceFileSystem as a Zarr 2 array:

>>> import fsspec
>>> import tifffile.numcodecs
>>> tifffile.numcodecs.register_codec()
>>> mapper = fsspec.get_mapper(
...     'reference://', fo='temp.json', target_protocol='file'
... )
>>> zarr.open(mapper, mode='r')

Inspect the TIFF file from the command line::

    $ python -m tifffile temp.ome.tif

tifffile-2025.3.30/tifffile.egg-info/SOURCES.txt

ACKNOWLEDGEMENTS.rst
CHANGES.rst
LICENSE
MANIFEST.in
README.rst
setup.py
docs/conf.py
docs/make.py
docs/_static/custom.css
examples/earthbigdata.py
examples/issue125.py
tests/conftest.py
tests/test_tifffile.py
tifffile/__init__.py
tifffile/__main__.py
tifffile/_imagecodecs.py
tifffile/geodb.py
tifffile/lsm2bin.py
tifffile/numcodecs.py
tifffile/py.typed
tifffile/tiff2fsspec.py
tifffile/tiffcomment.py
tifffile/tifffile.py
tifffile.egg-info/PKG-INFO
tifffile.egg-info/SOURCES.txt
tifffile.egg-info/dependency_links.txt
tifffile.egg-info/entry_points.txt
tifffile.egg-info/requires.txt
tifffile.egg-info/top_level.txt

tifffile-2025.3.30/tifffile.egg-info/dependency_links.txt

tifffile-2025.3.30/tifffile.egg-info/entry_points.txt

[console_scripts]
lsm2bin = tifffile.lsm2bin:main
tiff2fsspec = tifffile.tiff2fsspec:main
tiffcomment = tifffile.tiffcomment:main
tifffile = tifffile:main

tifffile-2025.3.30/tifffile.egg-info/requires.txt

numpy

[all]
imagecodecs>=2024.12.30
matplotlib
defusedxml
lxml
zarr<3
fsspec

[codecs]
imagecodecs>=2024.12.30

[plot]
matplotlib

[test]
pytest
imagecodecs
czifile
cmapfile
oiffile
lfdfiles
psdtags
roifile
lxml
zarr<3
dask
xarray
fsspec
defusedxml
ndtiff

[xml]
defusedxml
lxml

[zarr]
zarr<3
fsspec

tifffile-2025.3.30/tifffile.egg-info/top_level.txt
tifffile