---- spectral-cube-0.6.0/.github/workflows/main.yml ----

name: Run tests

on: [push, pull_request]

jobs:
  tests:
    name: ${{ matrix.name }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            python-version: 3.9
            name: Python 3.9 with minimal dependencies
            toxenv: py39-test
          - os: ubuntu-latest
            python-version: 3.9
            name: Python 3.9 with no visualization + coverage
            toxenv: py39-test-novis-cov
          - os: ubuntu-latest
            python-version: 3.8
            name: Python 3.8 with minimal dependencies
            toxenv: py38-test
          - os: ubuntu-latest
            python-version: 3.7
            name: Python 3.7 with minimal dependencies
            toxenv: py37-test
          - os: ubuntu-latest
            python-version: 3.9
            name: Python 3.9 with all non-visualization dependencies (except CASA)
            toxenv: py39-test-novis
          - os: ubuntu-18.04
            python-version: 3.6
            name: Python 3.6 with minimal dependencies and CASA
            toxenv: py36-test-casa
          - os: ubuntu-latest
            python-version: 3.9
            name: Python 3.9, all non-visualization dependencies, and dev versions of key dependencies
            toxenv: py39-test-novis-dev
          - os: ubuntu-latest
            python-version: 3.8
            name: Python 3.8 with all non-visualization dependencies (except CASA)
            toxenv: py38-test-novis
          - os: macos-latest
            python-version: 3.9
            name: Python 3.9 with all non-visualization dependencies (except CASA) on macOS
            toxenv: py39-test-novis
          - os: windows-latest
            python-version: 3.9
            name: Python 3.9, all dependencies, and dev versions of key dependencies on Windows
            toxenv: py39-test-all-dev
          - os: ubuntu-latest
            python-version: 3.9
            name: Documentation
            toxenv: build_docs

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install testing dependencies
        run: python -m pip install tox codecov
      - name: Run tests with ${{ matrix.name }}
        run: tox -v -e ${{ matrix.toxenv }}
      - name: Upload coverage to codecov
        if: ${{ contains(matrix.toxenv,'-cov') }}
        uses: codecov/codecov-action@v1.0.13
        with:
          file: ./coverage.xml

---- spectral-cube-0.6.0/.github/workflows/publish.yml ----

name: Build and upload to PyPI

on: [push, pull_request]

jobs:
  build_sdist_and_wheel:
    name: Build source distribution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        name: Install Python
        with:
          python-version: '3.9'
      - name: Install build
        run: python -m pip install build
      - name: Build sdist
        run: python -m build --sdist --wheel --outdir dist/ .
      - uses: actions/upload-artifact@v2
        with:
          path: dist/*

  upload_pypi:
    name: Upload to PyPI
    needs: [build_sdist_and_wheel]
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v')
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: artifact
          path: dist
      - uses: pypa/gh-action-pypi-publish@master
        with:
          user: __token__
          password: ${{ secrets.pypi_password }}

---- spectral-cube-0.6.0/.gitignore ----

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

# C extensions
*.so

# Distribution / packaging
.Python
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Rope
.ropeproject

# Django stuff:
*.log
*.pot

# Sphinx documentation
docs/_build/
docs/api
*.fits

# Other generated stuff
*/version.py
*/cython_version.py
.tmp

---- spectral-cube-0.6.0/.readthedocs.yml ----

version: 2

build:
  image: latest

# Install regular dependencies.
python:
  version: 3.7
  install:
    - method: pip
      path: .
      extra_requirements:
        - docs

---- spectral-cube-0.6.0/CHANGES.rst ----

0.6 (2021-09-30)
----------------

- Fix ``convolve_to`` when units are in Jy/beam. Add errors/warnings for all
  operations that change the spatial resolution of Jy/beam cubes.
- Add ``argmax_world`` and ``argmin_world`` to return the argmin/max position
  in WCS coordinates. This is ONLY defined for independent WCS axes (e.g.,
  spectral) #680
- Bugfix: subcube producing spatial offsets for large images #666
- Switch to using standalone casa-formats-io package for reading in CASA
  .image files (this was split out from spectral-cube). #684
- Make it possible to customize ``target_chunksize`` in the CASA reader. #705
- Fix support for dask.distributed. #712
- Bugfix: PhysicalType are now strings. #709

0.5 (2020-09-17)
----------------

- Bugfix: subcubes from compound regions previously did not work. #601
- Bugfix: VaryingResolutionSpectralCube.mask_channels now preserves previous
  mask. #620
- Refactor tests to use fixtures for accessing data instead of needing to
  run a script to generate test files. #598
- Refactor package infrastructure to no longer use astropy-helpers. #599
- Switch to using unified I/O infrastructure from Astropy. #600
- Bugfix: fix slicing of cubes with mask set to None. #621
- Refactor CASA I/O to use dask to access the array/mask data directly and
  to use only Python and Numpy to access image metadata. CASA images can now
  be read without CASA installed. #607, #609, #613
- Add new dask-friendly classes ``DaskSpectralCube`` and
  ``DaskVaryingResolutionSpectralCube`` which use dask to efficiently carry
  out calculations. #618
- Add new ``statistics`` method on ``DaskSpectralCube`` which allows several
  global statistics to be computed by loading each cube chunk only once. #663
0.4.5 (2019-11-30)
------------------

- Added support for casatools-based io in #541 and beam reading from CASA
  images in #543
- Add support for ``update_function`` in the joblib-based job distributor in #534
- Add tests for WCS equivalence in reprojected images in #589
- Improve error messages when CASA files are read incorrectly in #584
- Fix a small bug in matplotlib figure saving in #583
- Allow for reading of beamless cubes in #582
- Add support for 2d world functions in #575 and extrema in #552
- Handle kernels defined as quantities in smoothing in #578
- Fix bug with NPOL header keyword in #576
- Convolution will be skipped if beams are equal-sized in #573
- Fix one-D slicing with no beam in #568
- Parallelization documentation improvement in #557
- Astropy-helpers updated to 2.0.10 in #553
- Fixed some future warnings in #565
- Added a new documentation example in #548
- Added channel map making capability in #551
- Fix warnings when beam is not defined in #561
- Improvement to joblib parallelization in #564
- Add ppbeam attribute to lower-dimensional objects #549
- Handle CASA file beams in #543 and #545
- Add support for CASA reading using casatools (casa6) in #541
- Bugfix for slicing of different shapes in #532
- Fixes to yt integration in #531
- Add `unmasked_beams` attribute and change many beams behaviors in #502
- Bugfix for downsampled WCS corners in #525
- Performance enhancement to world extrema in #524
- Simplify conversion from CASA coordsys to FITS-WCS #593
- Add chunked file reading for CASA .image opening #592
- Dropped python 3.5 testing in #592

0.4.4 (2019-02-20)
------------------

- Refactor all beam parameters into mix-in classes; added BaseOneDSpectrum
  for common functionality between OneDSpectrum and
  VaryingResolutionOneDSpectrum. Retain beam objects when doing arithmetic
  with LDOs.
  (https://github.com/radio-astro-tools/spectral-cube/pull/521)
- Refactor OneDSpectrum objects to include a single beam if they were
  produced from a cube with a single beam to enable K<->Jy conversions
  (https://github.com/radio-astro-tools/spectral-cube/pull/510)
- Bugfix: fix compatibility of to_glue with latest versions of glue.
  (https://github.com/radio-astro-tools/spectral-cube/pull/491)
- Refactor to use regions instead of pyregion. Adds CRTF support
  (https://github.com/radio-astro-tools/spectral-cube/pull/488)
- Direct downsampling tools added, both in-memory and memmap
  (https://github.com/radio-astro-tools/spectral-cube/pull/486)

0.4.3 (2018-04-05)
------------------

- Refactor spectral smoothing tools to allow parallelized application *and*
  memory mapped output (to avoid loading cube into memory). Created
  ``apply_function_parallel_spectral`` to make this general. Added
  ``joblib`` as a dependency.
  (https://github.com/radio-astro-tools/spectral-cube/pull/474)
- Bugfix: Reversing a cube's spectral axis should now do something
  reasonable instead of unreasonable
  (https://github.com/radio-astro-tools/spectral-cube/pull/478)

0.4.2 (2018-02-21)
------------------

- Bugfix and enhancement: handle multiple beams using radio_beam's
  multiple-beams feature. This allows `convolve_to` to work when some beams
  are masked out.
  Also removes ``cube_utils.average_beams``, which is now implemented
  directly in radio_beam
  (https://github.com/radio-astro-tools/spectral-cube/pull/437)
- Added a variety of stacking tools, both for stacking full velocity cubes
  of different lines and for stacking full spectra based on a velocity field
  (https://github.com/radio-astro-tools/spectral-cube/pull/446,
  https://github.com/radio-astro-tools/spectral-cube/pull/453,
  https://github.com/radio-astro-tools/spectral-cube/pull/457,
  https://github.com/radio-astro-tools/spectral-cube/pull/465)

0.4.1 (2017-10-17)
------------------

- Add SpectralCube.with_beam and Projection.with_beam for attaching beam
  objects. Raise error for position-spectral slices of VRSCs
  (https://github.com/radio-astro-tools/spectral-cube/pull/433)
- Raise a nicer error if no data is present in the default or selected HDU
  (https://github.com/radio-astro-tools/spectral-cube/pull/424)
- Check mask inputs to OneDSpectrum and add mask handling for
  OneDSpectrum.spectral_interpolate
  (https://github.com/radio-astro-tools/spectral-cube/pull/400)
- Improve exception if cube does not have two celestial and one spectral
  dimension
  (https://github.com/radio-astro-tools/spectral-cube/pull/425)
- Add creating a Projection from a FITS HDU
  (https://github.com/radio-astro-tools/spectral-cube/pull/376)
- Deprecate numpy <=1.8 because nanmedian is needed
  (https://github.com/radio-astro-tools/spectral-cube/pull/373)
- Add tools for masking bad beams in VaryingResolutionSpectralCubes
  (https://github.com/radio-astro-tools/spectral-cube/pull/373)
- Don't warn if no beam was found in a cube
  (https://github.com/radio-astro-tools/spectral-cube/pull/422)

0.4.0 (2016-09-06)
------------------

- Handle equal beams when convolving cubes spatially.
  (https://github.com/radio-astro-tools/spectral-cube/pull/356)
- Whole cube convolution & reprojection has been added, including tools to
  smooth spectrally and spatially to force two cubes onto an identical grid.
  (https://github.com/radio-astro-tools/spectral-cube/pull/313)
- Bugfix: files larger than the available memory are now readable again
  because ``spectral-cube`` does not encourage you to modify cubes inplace
  (https://github.com/radio-astro-tools/spectral-cube/pull/299)
- Cube planes with bad beams will be masked out
  (https://github.com/radio-astro-tools/spectral-cube/pull/298)
- Added a new cube type, VaryingResolutionSpectralCube, meant to handle
  CASA-produced cubes that have different beams in each channel
  (https://github.com/radio-astro-tools/spectral-cube/pull/292)
- Added tests for new functionality in OneDSpectrum
  (https://github.com/radio-astro-tools/spectral-cube/pull/277)
- Split out common functionality between SpectralCube and
  LowerDimensionalObject into BaseNDClass and SpectralAxisMixinClass
  (https://github.com/radio-astro-tools/spectral-cube/pull/274)
- Added new linewidth_sigma and linewidth_fwhm methods to SpectralCube for
  computing linewidth maps, and make sure the documentation is clear that
  moment(order=2) is a variance map.
  (https://github.com/radio-astro-tools/spectral-cube/pull/275)
- Fixed significant error when the cube WCS includes a cd matrix.
  This error resulted in incorrect spectral coordinate conversions
  (https://github.com/radio-astro-tools/spectral-cube/pull/276)

0.3.2 (2016-07-11)
------------------

- Bugfix in configuration

0.3.1 (2016-02-04)
------------------

- Preserve metadata when making projections
  (https://github.com/radio-astro-tools/spectral-cube/pull/250)
- bugfix: cube._data cannot be a quantity
  (https://github.com/radio-astro-tools/spectral-cube/pull/251)
- partial fix for ds9 import bug
  (https://github.com/radio-astro-tools/spectral-cube/pull/253)
- preserve WCS information in projections
  (https://github.com/radio-astro-tools/spectral-cube/pull/256)
- whitespace stripped from BUNIT
  (https://github.com/radio-astro-tools/spectral-cube/pull/257)
- bugfix: sometimes cube would be read into memory when it should not be
  (https://github.com/radio-astro-tools/spectral-cube/pull/259)
- more projection preservation fixes
  (https://github.com/radio-astro-tools/spectral-cube/pull/265)
- correct jy/beam capitalization
  (https://github.com/radio-astro-tools/spectral-cube/pull/267)
- convenience attribute for beam access
  (https://github.com/radio-astro-tools/spectral-cube/pull/268)
- fix beam reading, which would claim failure even during success
  (https://github.com/radio-astro-tools/spectral-cube/pull/271)

0.3.0 (2015-08-16)
------------------

- Add experimental line-finding tool using astroquery.splatalogue
  (https://github.com/radio-astro-tools/spectral-cube/pull/210)
- Bugfixes (211,212,217)
- Add arithmetic operations (add, subtract, divide, multiply, power)
  (https://github.com/radio-astro-tools/spectral-cube/pull/220).
  These operations will not be permitted on large cubes by default, but will
  require the user to specify that they are allowed using the attribute
  ``allow_huge_operations``
- Implemented slicewise stddev and mean
  (https://github.com/radio-astro-tools/spectral-cube/pull/225)
- Bugfix: prevent a memory leak when creating a large number of Cubes
  (https://github.com/radio-astro-tools/spectral-cube/pull/233)
- Provide a ``base`` attribute so that tools like joblib can operate on
  ``SpectralCube`` s as memory maps
  (https://github.com/radio-astro-tools/spectral-cube/pull/230)
- Masks have a quicklook method
  (https://github.com/radio-astro-tools/spectral-cube/pull/228)
- Memory mapping can be disabled
  (https://github.com/radio-astro-tools/spectral-cube/pull/226)
- Add xor operations for Masks
  (https://github.com/radio-astro-tools/spectral-cube/pull/241)
- Added a new StokesSpectralCube class to deal with 4-d cubes
  (https://github.com/radio-astro-tools/spectral-cube/pull/249)

0.2.2 (2015-03-12)
------------------

- Output mask as a CASA image
  https://github.com/radio-astro-tools/spectral-cube/pull/171
- ytcube exports to .obj and .ply too
  https://github.com/radio-astro-tools/spectral-cube/pull/173
- Fix air wavelengths, which were mistreated
  (https://github.com/radio-astro-tools/spectral-cube/pull/186)
- Add support for sum/mean/std over both spatial axes to return a
  OneDSpectrum object.
  This PR also removes numpy 1.5-1.7 tests, since many `spectral_cube`
  functions are not compatible with these versions of numpy
  (https://github.com/radio-astro-tools/spectral-cube/pull/188)

0.2.1 (2014-12-03)
------------------

- CASA cube readers now compatible with ALMA .image files (tested on Cycle 2
  data) https://github.com/radio-astro-tools/spectral-cube/pull/165
- Spectral quicklooks available
  https://github.com/radio-astro-tools/spectral-cube/pull/164 now that 1D
  slices are possible
  https://github.com/radio-astro-tools/spectral-cube/pull/157
- `to_pvextractor` tool allows easy export to `pvextractor `_
  https://github.com/radio-astro-tools/spectral-cube/pull/160
- `to_glue` sends the cube to `glue `_
  https://github.com/radio-astro-tools/spectral-cube/pull/153

0.2 (2014-09-11)
----------------

- `moments` preserve spectral units now
  https://github.com/radio-astro-tools/spectral-cube/pull/118
- Initial support added for Air Wavelength. This is only 1-way support,
  round-tripping (vacuum->air) is not supported yet.
  https://github.com/radio-astro-tools/spectral-cube/pull/117
- Integer slices (single frames) are supported
  https://github.com/radio-astro-tools/spectral-cube/pull/113
- Bugfix: BUNIT capitalized
  https://github.com/radio-astro-tools/spectral-cube/pull/112
- Masks can be any array that is broadcastable to the cube shape
  https://github.com/radio-astro-tools/spectral-cube/pull/115
- Added `.header` and `.hdu` convenience methods
  https://github.com/radio-astro-tools/spectral-cube/pull/120
- Added public functions `apply_function` and `apply_numpy_function` that
  allow functions to be run on cubes while preserving important metadata
  (e.g., WCS)
- Added a quicklook tool using aplpy to view slices
  (https://github.com/radio-astro-tools/spectral-cube/pull/131)
- Added subcube and ds9 region extraction tools
  (https://github.com/radio-astro-tools/spectral-cube/pull/128)
- Added a `to_yt` function for easily converting between SpectralCube and yt
  datacube/dataset objects
  (https://github.com/radio-astro-tools/spectral-cube/pull/90,
  https://github.com/radio-astro-tools/spectral-cube/pull/129)
- Masks' `.include()` method works without ``data`` arguments.
  (https://github.com/radio-astro-tools/spectral-cube/pull/147)
- Allow movie name to be specified in yt movie creation
  (https://github.com/radio-astro-tools/spectral-cube/pull/145)
- add `flattened_world` method to get the world coordinates corresponding to
  each pixel in the flattened array
  (https://github.com/radio-astro-tools/spectral-cube/pull/146)

0.1 (2014-06-01)
----------------

- Initial Release.

---- spectral-cube-0.6.0/LICENSE.rst ----

Copyright (c) 2014, spectral-cube developers
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from this
   software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

---- spectral-cube-0.6.0/MANIFEST.in ----

include README.md
include CHANGES.rst
include LICENSE.rst
include pyproject.toml
include setup.cfg

recursive-include *.pyx *.c *.pxd
recursive-include docs *
recursive-include licenses *
recursive-include cextern *
recursive-include scripts *

prune build
prune docs/_build
prune docs/api

global-exclude *.pyc *.o

---- spectral-cube-0.6.0/PKG-INFO ----

Metadata-Version: 2.1
Name: spectral-cube
Version: 0.6.0
Summary: A package for interaction with spectral cubes
Home-page: http://spectral-cube.readthedocs.org
Author: Adam Ginsburg, Tom Robitaille, Chris Beaumont, Adam Leroy, Erik Rosolowsky, and Eric Koch
Author-email: adam.g.ginsburg@gmail.com
License: BSD
Platform: UNKNOWN
Provides-Extra: test
Provides-Extra: docs
Provides-Extra: novis
Provides-Extra: all
License-File: LICENSE.rst

About
=====

|Join the chat at https://gitter.im/radio-astro-tools/spectral-cube|

This package aims to facilitate the reading, writing, manipulation, and
analysis of spectral data cubes. More information is available in the
documentation, available `online at readthedocs.org `__.

.. figure:: http://img.shields.io/badge/powered%20by-AstroPy-orange.svg?style=flat
   :alt: Powered by Astropy Badge

   Powered by Astropy Badge

Credits
=======

This package is developed by:

- Chris Beaumont (`@ChrisBeaumont `__)
- Adam Ginsburg (`@keflavich `__)
- Adam Leroy (`@akleroy `__)
- Thomas Robitaille (`@astrofrog `__)
- Erik Rosolowsky (`@low-sky `__)
- Eric Koch (`@e-koch `__)

Build and coverage status
=========================

|Build Status| |Coverage Status| |DOI|

.. |Join the chat at https://gitter.im/radio-astro-tools/spectral-cube| image:: https://badges.gitter.im/Join%20Chat.svg
   :target: https://gitter.im/radio-astro-tools/spectral-cube?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
.. |Build Status| image:: https://travis-ci.org/radio-astro-tools/spectral-cube.png?branch=master
   :target: https://travis-ci.org/radio-astro-tools/spectral-cube
.. |Coverage Status| image:: https://coveralls.io/repos/radio-astro-tools/spectral-cube/badge.svg?branch=master
   :target: https://coveralls.io/r/radio-astro-tools/spectral-cube?branch=master
.. |DOI| image:: https://zenodo.org/badge/doi/10.5281/zenodo.11485.svg
   :target: http://dx.doi.org/10.5281/zenodo.11485

---- spectral-cube-0.6.0/README.rst ----

About
=====

|Join the chat at https://gitter.im/radio-astro-tools/spectral-cube|

This package aims to facilitate the reading, writing, manipulation, and
analysis of spectral data cubes. More information is available in the
documentation, available `online at readthedocs.org `__.

.. figure:: http://img.shields.io/badge/powered%20by-AstroPy-orange.svg?style=flat
   :alt: Powered by Astropy Badge

   Powered by Astropy Badge

Credits
=======

This package is developed by:

- Chris Beaumont (`@ChrisBeaumont `__)
- Adam Ginsburg (`@keflavich `__)
- Adam Leroy (`@akleroy `__)
- Thomas Robitaille (`@astrofrog `__)
- Erik Rosolowsky (`@low-sky `__)
- Eric Koch (`@e-koch `__)

Build and coverage status
=========================

|Build Status| |Coverage Status| |DOI|

.. |Join the chat at https://gitter.im/radio-astro-tools/spectral-cube| image:: https://badges.gitter.im/Join%20Chat.svg
   :target: https://gitter.im/radio-astro-tools/spectral-cube?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
.. |Build Status| image:: https://travis-ci.org/radio-astro-tools/spectral-cube.png?branch=master
   :target: https://travis-ci.org/radio-astro-tools/spectral-cube
.. |Coverage Status| image:: https://coveralls.io/repos/radio-astro-tools/spectral-cube/badge.svg?branch=master
   :target: https://coveralls.io/r/radio-astro-tools/spectral-cube?branch=master
.. |DOI| image:: https://zenodo.org/badge/doi/10.5281/zenodo.11485.svg
   :target: http://dx.doi.org/10.5281/zenodo.11485

---- spectral-cube-0.6.0/conftest.py ----

---- spectral-cube-0.6.0/docs/Makefile ----

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/SpectralCube.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/SpectralCube.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/SpectralCube"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/SpectralCube"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
---- spectral-cube-0.6.0/docs/_templates/autosummary/base.rst ----

{% extends "autosummary_core/base.rst" %}

---- spectral-cube-0.6.0/docs/_templates/autosummary/class.rst ----

{% extends "autosummary_core/class.rst" %}

---- spectral-cube-0.6.0/docs/_templates/autosummary/module.rst ----

{% extends "autosummary_core/module.rst" %}

---- spectral-cube-0.6.0/docs/accessing.rst ----

Accessing data
==============

Once you have initialized a :class:`~spectral_cube.SpectralCube` instance,
either directly or by reading in a file, you can easily access the data
values and the world coordinate information.

Data values
-----------

You can access the underlying data using the ``unmasked_data`` array which
is a Numpy-like array::

    >>> slice_unmasked = cube.unmasked_data[0,:,:]  # doctest: +SKIP

The order of the dimensions of the ``unmasked_data`` array is deterministic
- it is always ``(n_spectral, n_y, n_x)`` irrespective of how the cube was
stored on disk.

.. note:: The term ``unmasked`` indicates that the data is the raw original
          data from the file. :class:`~spectral_cube.SpectralCube` also
          allows masking of values, which is discussed in :doc:`masking`.

If a slice is not specified, the object returned is not strictly a Numpy
array, and will not work with all functions outside of the
``spectral_cube`` package that expect Numpy arrays. In order to extract a
normal Numpy array, instead specify a slice of ``[:]`` which will force the
object to be converted to a Numpy array (the compulsory slicing is
necessary in order to avoid memory-related issues with large data cubes).

World coordinates
-----------------

Given a cube object, it is straightforward to find the coordinates along
the spectral axis::

    >>> cube.spectral_axis  # doctest: +SKIP
    [ -2.97198762e+03  -2.63992044e+03  -2.30785327e+03  -1.97578610e+03
      -1.64371893e+03  -1.31165176e+03  -9.79584583e+02  -6.47517411e+02
      ...
       3.15629983e+04   3.18950655e+04   3.22271326e+04   3.25591998e+04
       3.28912670e+04   3.32233342e+04] m / s

The default units of a spectral axis are determined from the FITS header or
WCS object used to initialize the cube, but it is also possible to change
the spectral axis (see :doc:`manipulating`).

More generally, it is possible to extract the world coordinates of all the
pixels using the :attr:`~spectral_cube.SpectralCube.world` property, which
returns the spectral axis then the two positional coordinates in reverse
order (in the same order as the data indices).
    >>> velo, dec, ra = cube.world[:]  # doctest: +SKIP

In order to extract coordinates, a slice (such as ``[:]`` above) is
required. Using ``[:]`` will return three 3-d arrays with the coordinates
for all pixels. Using e.g. ``[0,:,:]`` will return three 2-d arrays of
coordinates for the first spectral slice.

If you forget to specify a slice, you will get the following error:

    >>> velo, dec, ra = cube.world  # doctest: +SKIP
    ...
    Exception: You need to specify a slice (e.g. ``[:]`` or ``[0,:,:]``
    in order to access this property.

In the case of large data cubes, requesting the coordinates of all pixels
would likely be too slow, so the slicing allows you to compute only a
subset of the pixel coordinates (see :doc:`big_data` for more information
on dealing with large data cubes).

---- spectral-cube-0.6.0/docs/api.rst ----

API Documentation
=================

.. automodapi:: spectral_cube
    :no-inheritance-diagram:
    :inherited-members:

.. automodapi:: spectral_cube.ytcube
    :no-inheritance-diagram:
    :no-inherited-members:

.. automodapi:: spectral_cube.io.casa_masks
    :no-inheritance-diagram:
    :no-inherited-members:

.. automodapi:: spectral_cube.lower_dimensional_structures
    :no-inheritance-diagram:
    :no-inherited-members:

.. automodapi:: spectral_cube.base_class
    :no-inheritance-diagram:
    :inherited-members:

.. automodapi:: spectral_cube.spectral_cube
    :no-inheritance-diagram:
    :inherited-members:

.. automodapi:: spectral_cube.masks
    :no-inheritance-diagram:
    :inherited-members:

---- spectral-cube-0.6.0/docs/arithmetic.rst ----

Spectral Cube Arithmetic
========================

Simple arithmetic operations between cubes and scalars, broadcastable numpy
arrays, and other cubes are possible. However, such operations should be
performed with caution because they require loading the whole cube into
memory and will generally create a new cube in memory.

Examples::

    >>> import astropy.units as u
    >>> from astropy.utils import data
    >>> fn = data.get_pkg_data_filename('tests/data/example_cube.fits', 'spectral_cube')
    >>> from spectral_cube import SpectralCube
    >>> cube = SpectralCube.read(fn)
    >>> cube2 = cube * 2
    >>> cube3 = cube + 1.5 * u.Jy / u.beam
    >>> cube4 = cube2 + cube3

Each of these cubes is a new cube in memory. Note that for addition and
subtraction, the units must be equivalent to those of the cube.

Please see :ref:`doc_handling_large_datasets` for details on how to perform
arithmetic operations on a small subset of data at a time.

---- spectral-cube-0.6.0/docs/beam_handling.rst ----

Handling Beams
==============

If you are using radio data, your cubes should have some sort of beam
information included. ``spectral-cube`` handles beams using the
`radio_beam `_ package.

There are two ways beams can be stored in FITS files: as FITS header
keywords (``BMAJ``, ``BMIN``, and ``BPA``) or as a ``BinTableHDU``
extension. If the latter is present, ``spectral-cube`` will return a
`~spectral_cube.spectral_cube.VaryingResolutionSpectralCube` object.
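A quick way to check which case applies to a given file is to look at the
object you get back; this is a minimal sketch, where the file name
``mycube.fits`` is a placeholder rather than a real test file::

    >>> from spectral_cube import SpectralCube  # doctest: +SKIP
    >>> cube = SpectralCube.read('mycube.fits')  # doctest: +SKIP
    >>> # multi-beam cubes expose a per-channel ``beams`` attribute,
    >>> # while single-beam cubes expose a single ``beam``
    >>> hasattr(cube, 'beams')  # doctest: +SKIP
    False
    >>> cube.beam  # doctest: +SKIP
    Beam: BMAJ=... arcsec BMIN=... arcsec BPA=... deg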
For the simpler case of a single beam across all channels, the presence of
the beam allows for direct conversion of a cube with Jy/beam units to
surface brightness (K) units. Note, however, that this requires loading the
entire cube into memory::

    >>> cube.unit  # doctest: +SKIP
    Unit("Jy / beam")
    >>> kcube = cube.to(u.K)  # doctest: +SKIP
    >>> kcube.unit  # doctest: +SKIP
    Unit("K")

Adding a Beam
-------------

If your cube does not have a beam, a custom beam can be attached::

    >>> from radio_beam import Beam  # doctest: +SKIP
    >>> new_beam = Beam(1. * u.deg)  # doctest: +SKIP
    >>> new_cube = cube.with_beam(new_beam)  # doctest: +SKIP
    >>> new_cube.beam  # doctest: +SKIP
    Beam: BMAJ=3600.0 arcsec BMIN=3600.0 arcsec BPA=0.0 deg

This is handy for synthetic observations, which initially have a point-like
beam::

    >>> point_beam = Beam(0 * u.deg)  # doctest: +SKIP
    >>> new_cube = synth_cube.with_beam(point_beam)  # doctest: +SKIP
    >>> new_cube.beam  # doctest: +SKIP
    Beam: BMAJ=0.0 arcsec BMIN=0.0 arcsec BPA=0.0 deg

The cube can then be convolved to a new resolution::

    >>> new_beam = Beam(60 * u.arcsec)  # doctest: +SKIP
    >>> conv_synth_cube = synth_cube.convolve_to(new_beam)  # doctest: +SKIP
    >>> conv_synth_cube.beam  # doctest: +SKIP
    Beam: BMAJ=60.0 arcsec BMIN=60.0 arcsec BPA=0.0 deg

Beams can also be attached in the same way to `~spectral_cube.Projection`
and `~spectral_cube.Slice` objects.

Multi-beam cubes
----------------

Varying resolution (multi-beam) cubes are somewhat trickier to work with in
general, though unit conversion is easy. You can perform the same sort of
unit conversion with
`~spectral_cube.spectral_cube.VaryingResolutionSpectralCube` s as with
regular `~spectral_cube.spectral_cube.SpectralCube` s; ``spectral-cube``
will use a different beam and frequency for each plane.

You can identify channels with bad beams (i.e., beams that differ from a
reference beam, which by default is the median beam) using
`~spectral_cube.spectral_cube.VaryingResolutionSpectralCube.identify_bad_beams`
(the returned value is a mask array where ``True`` means the channel is
good), mask channels with undesirable beams using
`~spectral_cube.spectral_cube.VaryingResolutionSpectralCube.mask_out_bad_beams`,
and in general mask out individual channels using
`~spectral_cube.spectral_cube.VaryingResolutionSpectralCube.mask_channels`.

For other sorts of operations, discussion of how to deal with these cubes
via smoothing to a common resolution is in the :doc:`smoothing` document.

---- spectral-cube-0.6.0/docs/big_data.rst ----

.. _doc_handling_large_datasets:

Handling large datasets
=======================

.. currentmodule:: spectral_cube

.. TODO: we can move things specific to large data and copying/referencing here.

The :class:`SpectralCube` class is designed to allow working with files
larger than can be stored in memory. To take advantage of this and work
effectively with large spectral cubes, you should keep the following three
ideas in mind:

- Work with small subsets of data at a time.
- Minimize data copying.
- Minimize the number of passes over the data.

Work with small subsets of data at a time
-----------------------------------------

Numpy supports a *memory-mapping* mode which means that the data is stored
on disk and the array elements are only loaded into memory when needed.
``spectral_cube`` takes advantage of this if possible, to avoid loading
large files into memory.
Typically, working with NumPy involves writing code that operates on an
entire array at once. For example::

    x = <a big array>
    y = np.sum(np.abs(x * 3 + 10), axis=0)

Unfortunately, this code creates several temporary arrays whose size is
equal to ``x``. This is infeasible if ``x`` is a large memory-mapped array,
because an operation like ``(x * 3)`` will require more RAM than exists on
your system.

A better way to compute y is by working with a single slice of ``x`` at a
time::

    y = np.zeros_like(x[0])
    for plane in x:
        y += np.abs(plane * 3 + 10)

Many methods in :class:`SpectralCube` allow you to extract subsets of
relevant data, to make writing code like this easier:

- `~spectral_cube.base_class.MaskableArrayMixinClass.filled_data`,
  `~spectral_cube.BaseSpectralCube.unmasked_data`,
  `~spectral_cube.base_class.SpatialCoordMixinClass.world` all accept Numpy
  style slice syntax. For example, ``cube.filled_data[0:3, :, :]`` returns
  only the first 3 spectral channels of the cube, with masked elements
  replaced with ``cube.fill_value``.
- :class:`~spectral_cube.SpectralCube` itself can be sliced to extract
  subcubes.
- `~spectral_cube.base_class.BaseSpectralCube.spectral_slab` extracts a
  subset of spectral channels.

Many methods in :class:`~spectral_cube.SpectralCube` iterate over smaller
chunks of data, to avoid large memory allocations when working with big
cubes. Some of these have a ``how`` keyword parameter, for fine-grained
control over how much memory is accessed at once. ``how='cube'`` works with
the entire array in memory, ``how='slice'`` works with one slice at a time,
and ``how='ray'`` works with one ray at a time.

As a user, your best strategy for working with large datasets is to rely on
builtin methods to :class:`~spectral_cube.SpectralCube`, and to access data
from `~spectral_cube.base_class.MaskableArrayMixinClass.filled_data` and
`~spectral_cube.BaseSpectralCube.unmasked_data` in smaller chunks if
possible.

.. warning:: At the moment, :meth:`~SpectralCube.argmax` and
             :meth:`~SpectralCube.argmin` are **not** optimized for
             handling large datasets.

Minimize Data Copying
---------------------

Methods in :class:`~spectral_cube.SpectralCube` avoid copying as much as
possible. For example, all of the following operations create new cubes or
masks without copying any data::

    >>> mask = cube > 3  # doctest: +SKIP
    >>> slab = cube.spectral_slab(...)  # doctest: +SKIP
    >>> subcube = cube[0::2, 10:, 0:30]  # doctest: +SKIP
    >>> cube2 = cube.with_fill(np.nan)  # doctest: +SKIP
    >>> cube2 = cube.apply_mask(mask)  # doctest: +SKIP

Minimize the number of passes over the data
-------------------------------------------

Accessing memory-mapped arrays is much slower than a normal array, due to
the overhead of reading from disk. Because of this, it is more efficient to
perform computations that iterate over the data as few times as possible.

An even subtler issue pertains to how the 3D or 4D spectral cube is
arranged as a 1D sequence of bytes in a file. Data access is much faster
when it corresponds to a single contiguous scan of bytes on disk. For more
information on this topic, see `this tutorial on Numpy strides `_.

Recipe for large cube operations that can't be done in memory
-------------------------------------------------------------

Sometimes, you will need to run full-cube operations that can't be done in
memory and can't be handled by spectral-cube's built in operations. An
example might be converting your cube from Jy/beam to K when you have a
very large (e.g., >10GB) cube.
Handling this sort of situation requires several manual steps. First, hard
drive space needs to be allocated for the output data. Then, the cube must
be manually looped over using a strategy that holds only limited data in
memory::

    >>> import shutil  # doctest: +SKIP
    >>> from spectral_cube import SpectralCube  # doctest: +SKIP
    >>> from astropy.io import fits  # doctest: +SKIP
    >>> cube = SpectralCube.read('file.fits')  # doctest: +SKIP
    >>> # this copy step is necessary to allocate memory for the output
    >>> shutil.copy('file.fits', 'newfile.fits')  # doctest: +SKIP
    >>> outfh = fits.open('newfile.fits', mode='update')  # doctest: +SKIP
    >>> jtok_factors = cube.jtok_factors()  # doctest: +SKIP
    >>> for index, (plane, factor) in enumerate(zip(cube, jtok_factors)):  # doctest: +SKIP
    ...     outfh[0].data[index] = plane * factor  # doctest: +SKIP
    ...     outfh.flush()  # write the data to disk  # doctest: +SKIP
    >>> outfh[0].header['BUNIT'] = 'K'  # doctest: +SKIP
    >>> outfh.flush()  # doctest: +SKIP

---- spectral-cube-0.6.0/docs/conf.py ----

# -*- coding: utf-8 -*-
# Licensed under a 3-clause BSD style license - see LICENSE.rst
#
# Astropy documentation build configuration file.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this file.
#
# All configuration values have a default. Some values are defined in
# the global Astropy configuration which is loaded here before anything else.
# See astropy.sphinx.conf for which values are set there.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('..'))
# IMPORTANT: the above commented section was generated by sphinx-quickstart, but
# is *NOT* appropriate for astropy or Astropy affiliated packages. It is left
# commented out with this explanation to make it clear why this should not be
# done. If the sys.path entry above is added, when the astropy.sphinx.conf
# import occurs, it will import the *source* version of astropy instead of the
# version installed (if invoked as "make html" or directly with sphinx), or the
# version in the build directory (if "python setup.py build_sphinx" is used).
# Thus, any C-extensions that are needed to build the documentation will *not*
# be accessible, and the documentation will not build correctly.

import os
import sys
import datetime
from importlib import import_module

try:
    from sphinx_astropy.conf.v1 import *  # noqa
except ImportError:
    print('ERROR: the documentation requires the sphinx-astropy package to be installed')
    sys.exit(1)

# Get configuration information from setup.cfg
from configparser import ConfigParser
conf = ConfigParser()

conf.read([os.path.join(os.path.dirname(__file__), '..', 'setup.cfg')])
setup_cfg = dict(conf.items('metadata'))

# -- General configuration ----------------------------------------------------

# By default, highlight as Python 3.
highlight_language = 'python3'

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.2'

# To perform a Sphinx version check that needs to be more specific than
# major.minor, call `check_sphinx_version("x.y.z")` here.
# check_sphinx_version("1.2.1")

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns.append('_templates')

# This is added to the end of RST files - a good place to put substitutions to
# be used globally.
rst_epilog += """
"""

# -- Project information ------------------------------------------------------

# This does not *have* to match the package name, but typically does
project = setup_cfg['name']
author = setup_cfg['author']
copyright = '{0}, {1}'.format(
    datetime.datetime.now().year, setup_cfg['author'])

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.

from pkg_resources import get_distribution
version = release = get_distribution(setup_cfg['name']).version

# -- Options for HTML output --------------------------------------------------

# A NOTE ON HTML THEMES
# The global astropy configuration uses a custom theme, 'bootstrap-astropy',
# which is installed along with astropy. A different theme can be used or
# the options for this theme can be modified by overriding some of the
# variables set in the global configuration. The variables set in the
# global configuration are listed below, commented out.

# Add any paths that contain custom themes here, relative to this directory.
# To use a different custom theme, add the directory containing the theme.
#html_theme_path = []

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. To override the custom theme, set this to the
# name of a builtin theme or the name of a custom theme in html_theme_path.
#html_theme = None

html_theme_options = {
    'logotext1': 'spectral-cube',  # white,  semi-bold
    'logotext2': '',               # orange, light
    'logotext3': ':docs'           # white,  light
    }

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = ''

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = ''

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = ''

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = '{0} v{1}'.format(project, release)

# Output file base name for HTML help builder.
htmlhelp_basename = project + 'doc'

# -- Options for LaTeX output -------------------------------------------------

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [('index', project + '.tex', project + u' Documentation',
                    author, 'manual')]

# -- Options for manual page output -------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [('index', project.lower(), project + u' Documentation',
              [author], 1)]

# -- Options for the edit_on_github extension ---------------------------------

if eval(setup_cfg.get('edit_on_github')):
    extensions += ['sphinx_astropy.ext.edit_on_github']

    versionmod = __import__(setup_cfg['package_name'] + '.version')
    edit_on_github_project = setup_cfg['github_project']
    if versionmod.version.release:
        edit_on_github_branch = "v" + versionmod.version.version
    else:
        edit_on_github_branch = "master"

    edit_on_github_source_root = ""
    edit_on_github_doc_root = "docs"

# -- Resolving issue number to links in changelog -----------------------------
github_issues_url = 'https://github.com/{0}/issues/'.format(setup_cfg['github_project'])

# -- Turn on nitpicky mode for sphinx (to warn about references not found) ----
#
# nitpicky = True
# nitpick_ignore = []
#
# Some warnings are impossible to suppress, and you can list specific references
# that should be ignored in a nitpick-exceptions file which should be inside
# the docs/ directory. The format of the file should be:
#
# for example:
#
# py:class astropy.io.votable.tree.Element
# py:class astropy.io.votable.tree.SimpleElement
# py:class astropy.io.votable.tree.SimpleElementWithContent
#
# Uncomment the following lines to enable the exceptions:
#
# for line in open('nitpick-exceptions'):
#     if line.strip() == "" or line.startswith("#"):
#         continue
#     dtype, target = line.split(None, 1)
#     target = target.strip()
#     nitpick_ignore.append((dtype, six.u(target)))

---- spectral-cube-0.6.0/docs/continuum_subtraction.rst ----

Continuum Subtraction
=====================

A common task with data cubes is continuum identification and subtraction.
For line-rich cubes where the continuum is difficult to identify, you
should use `statcont `_. For single-line cubes, the process is much easier.

First, the simplest case is when you have a single line that makes up a
small fraction of the total observed band, e.g., a narrow line. In this
case, you can use a simple median approximation for the continuum::

    >>> med = cube.median(axis=0)  # doctest: +SKIP
    >>> med_sub_cube = cube - med  # doctest: +SKIP

The second part of this task may complain that the cube is too big. If it
does, you can still do the above operation by first setting
``cube.allow_huge_operations=True``, but be warned that this can be
expensive.

For a more complicated case, you may want to mask out the line-containing
channels. This can be done using a spectral boolean mask::

    >>> from astropy import units as u  # doctest: +SKIP
    >>> import numpy as np  # doctest: +SKIP
    >>> spectral_axis = cube.with_spectral_unit(u.km/u.s).spectral_axis  # doctest: +SKIP
    >>> good_channels = (spectral_axis < 25*u.km/u.s) | (spectral_axis > 45*u.km/u.s)  # doctest: +SKIP
    >>> masked_cube = cube.with_mask(good_channels[:, np.newaxis, np.newaxis])  # doctest: +SKIP
    >>> med = masked_cube.median(axis=0)  # doctest: +SKIP
    >>> med_sub_cube = cube - med  # doctest: +SKIP

The array ``good_channels`` is a simple 1D numpy boolean array that is
``True`` for all channels below 25 km/s and above 45 km/s, and is ``False``
for all channels in the range 25-45 km/s. The indexing trick
``good_channels[:, np.newaxis, np.newaxis]`` (or equivalently,
``good_channels[:, None, None]``) is just a way to tell the cube which axes
to project along.
In more recent versions of ``spectral-cube``, the indexing trick is not
necessary. The median in this case is computed only over the specified
line-free channels. Any operation can be used to compute the continuum,
such as the ``mean`` or some ``percentile``, but for most use cases, the
``median`` is fine.

---- spectral-cube-0.6.0/docs/creating_reading.rst ----

Creating/reading spectral cubes
===============================

Importing
---------

The :class:`~spectral_cube.SpectralCube` class is used to represent
3-dimensional datasets (two positional dimensions and one spectral
dimension) with a World Coordinate System (WCS) projection that describes
the mapping from pixel to world coordinates and vice-versa. The class is
imported with::

    >>> from spectral_cube import SpectralCube

Reading from a file
-------------------

In most cases, you are likely to read in an existing spectral cube from a
file. The reader is designed to be able to deal with any arbitrary axis
order and always return a consistently oriented spectral cube (see
:doc:`accessing`). To read in a file, use the
:meth:`~spectral_cube.SpectralCube.read` method as follows::

    >>> cube = SpectralCube.read('L1448_13CO.fits')  # doctest: +SKIP

This will always read the Stokes I parameter in the file. For information
on accessing other Stokes parameters, see :doc:`stokes`.

.. note:: In most cases, the FITS reader should be able to open the file in
          *memory-mapped* mode, which means that the data is not
          immediately read, but is instead read as needed when data is
          accessed. This allows large files (including larger than memory)
          to be accessed. However, note that certain FITS files cannot be
          opened in memory-mapped mode, in particular compressed (e.g.
          ``.fits.gz``) files. See the :doc:`big_data` page for more
          details about dealing with large data sets.

Direct Initialization
---------------------

If you are interested in directly creating a
:class:`~spectral_cube.SpectralCube` instance, you can do so using a 3-d
Numpy-like array with a 3-d :class:`~astropy.wcs.WCS` object::

    >>> cube = SpectralCube(data=data, wcs=wcs)  # doctest: +SKIP

Here ``data`` can be any Numpy-like array, including *memory-mapped* Numpy
arrays (as mentioned in `Reading from a file`_, memory-mapping is a
technique that avoids reading the whole file into memory and instead
accessing it from the disk as needed).

Hacks for simulated data
------------------------

If you're working on synthetic images or simulated data, where a location
on the sky is not relevant (but the frequency/wavelength axis still is!), a
hack is required to set up the `world coordinate system `_. The header
should be set up such that the projection is cartesian, i.e.::

    CRVAL1 = 0
    CTYPE1 = 'RA---CAR'
    CRVAL2 = 0
    CTYPE2 = 'DEC--CAR'
    CDELT1 = 1.0e-4 / degrees
    CDELT2 = 1.0e-4 / degrees
    CUNIT1 = 'deg'
    CUNIT2 = 'deg'

Note that the x/y axes must always have angular units (i.e., degrees). If
your data are really in physical units, you should note that in the header
in other comments, but ``spectral-cube`` doesn't care about this.

If the frequency axis is irrelevant, ``spectral-cube`` is probably not the
right tool to use; instead you should use `astropy.io.fits `_ or some other
file reader directly.
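For the simulated-data hack above, the WCS can also be built
programmatically instead of via header cards. The following is a minimal
sketch, not part of the package's test suite; the pixel scale, reference
frequency, and cube shape are arbitrary placeholders::

    >>> import numpy as np  # doctest: +SKIP
    >>> from astropy import units as u  # doctest: +SKIP
    >>> from astropy.wcs import WCS  # doctest: +SKIP
    >>> from spectral_cube import SpectralCube  # doctest: +SKIP
    >>> wcs = WCS(naxis=3)  # doctest: +SKIP
    >>> # FITS axis order is (x, y, spectral), i.e. reversed from numpy
    >>> wcs.wcs.ctype = ['RA---CAR', 'DEC--CAR', 'FREQ']  # doctest: +SKIP
    >>> wcs.wcs.cunit = ['deg', 'deg', 'Hz']  # doctest: +SKIP
    >>> wcs.wcs.cdelt = [1.0e-4, 1.0e-4, 1.0e6]  # doctest: +SKIP
    >>> wcs.wcs.crval = [0, 0, 100.0e9]  # doctest: +SKIP
    >>> # numpy array shape is (n_spectral, n_y, n_x)
    >>> data = np.random.random((50, 128, 128))  # doctest: +SKIP
    >>> cube = SpectralCube(data=data * u.Jy / u.beam, wcs=wcs)  # doctest: +SKIP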
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/dask.rst0000644000175100001710000004272700000000000016226 0ustar00runnerdockerIntegration with dask ===================== Getting started --------------- When loading a cube with the :class:`~spectral_cube.SpectralCube` class, it is possible to optionally specify the ``use_dask`` keyword argument to control whether or not to use new experimental classes (:class:`~spectral_cube.DaskSpectralCube` and :class:`~spectral_cube.DaskVaryingResolutionSpectralCube`) that use `dask `_ for representing cubes and carrying out computations efficiently. The default is ``use_dask=True`` when reading in CASA spectral cubes, but not when loading cubes from other formats. To read in a FITS cube using the dask-enabled classes, you can do:: >>> from astropy.utils import data >>> from spectral_cube import SpectralCube >>> fn = data.get_pkg_data_filename('tests/data/example_cube.fits', 'spectral_cube') >>> cube = SpectralCube.read(fn, use_dask=True) >>> cube DaskSpectralCube with shape=(7, 4, 3) and unit=Jy / beam and chunk size (7, 4, 3): n_x: 3 type_x: RA---ARC unit_x: deg range: 52.231466 deg: 52.231544 deg n_y: 4 type_y: DEC--ARC unit_y: deg range: 31.243639 deg: 31.243739 deg n_s: 7 type_s: VRAD unit_s: m / s range: 14322.821 m / s: 14944.909 m / s Most of the properties and methods that normally work with :class:`~spectral_cube.SpectralCube` should continue to work with :class:`~spectral_cube.DaskSpectralCube`. Schedulers and parallel computations ------------------------------------ By default, we use the ``'synchronous'`` `dask scheduler `_ which means that calculations are run in a single process/thread. However, you can control this using the :meth:`~spectral_cube.DaskSpectralCube.use_dask_scheduler` method: >>> cube.use_dask_scheduler('threads') # doctest: +IGNORE_OUTPUT Any calculation after this will then use the multi-threaded scheduler. It is also possible to use this as a context manager, to temporarily change the scheduler:: >>> with cube.use_dask_scheduler('threads'): # doctest: +IGNORE_OUTPUT ... cube.max() You can optionally specify the number of threads/processes to use with ``num_workers``:: >>> with cube.use_dask_scheduler('threads', num_workers=4): # doctest: +IGNORE_OUTPUT ... cube.max() If you don't specify the number of threads, this could end up being quite large, and cause you to run out of memory for certain operations. If you want to use `dask.distributed `_ you will need to make sure you pass the client to the :meth:`~spectral_cube.DaskSpectralCube.use_dask_scheduler` method, e.g.:: >>> from dask.distributed import Client, LocalCluster # doctest: +SKIP >>> cluster = LocalCluster(n_workers=4, ... threads_per_worker=4, ... memory_limit='10GB') # doctest: +SKIP >>> client = Client(cluster) # doctest: +SKIP >>> cube = SpectralCube.read(...) # doctest: +SKIP >>> cube.use_dask_scheduler(client) # doctest: +SKIP If you run into the following error when using `dask.distributed`_:: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() You should place the main code in your script inside:: if __name__ == '__main__': .. 
note:: Running operations in parallel may sometimes be less efficient than running them in serial depending on how your data is stored, so don't assume that it will always be faster. If you want to see a progress bar when carrying out calculations, you can make use of the `dask.diagnostics `_ sub-package - run the following at the start of your script/session, and all subsequent calculations will display a progress bar: >>> from dask.diagnostics import ProgressBar >>> pbar = ProgressBar() >>> pbar.register() >>> cube.max() # doctest: +IGNORE_OUTPUT [########################################] | 100% Completed | 0.1s Saving intermediate results to disk ----------------------------------- When calling methods such as :meth:`~spectral_cube.DaskSpectralCube.convolve_to`, or any other method that returns a cube, the result is not immediately calculated - instead, the result is only computed when data is accessed directly (for example via `~spectral_cube.DaskSpectralCube.filled_data`), or when writing the cube to disk, for example as a FITS file. However, when doing several operations in a row, such as spectrally smoothing the cube, then spatially smoothing it, it can be more efficient to store intermediate results to disk. All methods that return a cube can therefore take the ``save_to_tmp_dir`` option (defaulting to `False`) which can be set to `True` to compute the result of the operation immediately, save it to a temporary directory, and re-read it immediately from disk (for users interested in how the data is stored, it is stored as a `zarr `_ dataset):: >>> cube_new = cube.sigma_clip_spectrally(3, save_to_tmp_dir=True) # doctest: +IGNORE_OUTPUT [########################################] | 100% Completed | 0.1s >>> cube_new DaskSpectralCube with shape=(7, 4, 3) and unit=Jy / beam and chunk size (7, 4, 3): n_x: 3 type_x: RA---ARC unit_x: deg range: 52.231466 deg: 52.231544 deg n_y: 4 type_y: DEC--ARC unit_y: deg range: 31.243639 deg: 31.243739 deg n_s: 7 type_s: VRAD unit_s: m / s range: 14322.821 m / s: 14944.909 m / s Note that this requires the `zarr`_ and `fsspec `_ packages to be installed. This can also be beneficial if you are using multiprocessing or multithreading to carry out calculations, because zarr works nicely with disk access from different threads and processes. Rechunking data --------------- In some cases, the way the data is chunked on disk may be inefficient (for example large CASA datasets may be chunked into tens of thousands of blocks), which may make dask operations slow due to the size of the resulting task graph.
To get around this, you can use the :meth:`~spectral_cube.DaskSpectralCube.rechunk` method with the ``save_to_tmp_dir`` option mentioned above, which will rechunk the data to disk and make subsequent operations more efficient - either by letting dask choose the new chunk size:: >>> cube_new = cube.rechunk(save_to_tmp_dir=True) # doctest: +IGNORE_OUTPUT [########################################] | 100% Completed | 0.1s >>> cube_new DaskSpectralCube with shape=(7, 4, 3) and unit=Jy / beam and chunk size (7, 4, 3): n_x: 3 type_x: RA---ARC unit_x: deg range: 52.231466 deg: 52.231544 deg n_y: 4 type_y: DEC--ARC unit_y: deg range: 31.243639 deg: 31.243739 deg n_s: 7 type_s: VRAD unit_s: m / s range: 14322.821 m / s: 14944.909 m / s or by specifying it explicitly:: >>> cube_new = cube.rechunk(chunks=(2, 2, 2), save_to_tmp_dir=True) # doctest: +IGNORE_OUTPUT [########################################] | 100% Completed | 0.1s >>> cube_new DaskSpectralCube with shape=(7, 4, 3) and unit=Jy / beam and chunk size (2, 2, 2): n_x: 3 type_x: RA---ARC unit_x: deg range: 52.231466 deg: 52.231544 deg n_y: 4 type_y: DEC--ARC unit_y: deg range: 31.243639 deg: 31.243739 deg n_s: 7 type_s: VRAD unit_s: m / s range: 14322.821 m / s: 14944.909 m / s While the :meth:`~spectral_cube.DaskSpectralCube.rechunk` method can be used without the ``save_to_tmp_dir=True`` option, which then just adds the rechunking to the dask tree, doing so is unlikely to lead to performance gains. A common scenario for rechunking is if you plan to do mostly operations that collapse along the spectral axis, for example computing moment maps. In this case you can use:: >>> cube_new = cube.rechunk(chunks=(-1, 'auto', 'auto'), save_to_tmp_dir=True) # doctest: +IGNORE_OUTPUT [########################################] | 100% Completed | 0.1s which will rechunk the data into chunks that span the full spectral axis but are subdivided in the image plane. A complementary case is if you plan to apply operations to each image plane, such as spatial convolution, in which case you can divide the data into spectral chunks that span the whole of the image dimensions:: >>> cube_new = cube.rechunk(chunks=('auto', -1, -1), save_to_tmp_dir=True) # doctest: +IGNORE_OUTPUT [########################################] | 100% Completed | 0.1s Performance benefits of dask classes ------------------------------------ The :class:`~spectral_cube.DaskSpectralCube` class in general provides better performance than the regular :class:`~spectral_cube.SpectralCube` class. As an example, we take a look at a spectral cube in FITS format for which we want to determine the continuum using sigma clipping. When doing this in serial mode, we already see improvements in performance - first we show the regular spectral cube capabilities without dask:: >>> from spectral_cube import SpectralCube >>> cube_plain = SpectralCube.read('large_spectral_cube.fits') # doctest: +SKIP >>> %time cube_plain.sigma_clip_spectrally(1) # doctest: +SKIP ... CPU times: user 5min 58s, sys: 38 s, total: 6min 36s Wall time: 6min 37s and using the :class:`~spectral_cube.DaskSpectralCube` class:: >>> cube_dask = SpectralCube.read('large_spectral_cube.fits', use_dask=True) # doctest: +SKIP >>> %time cube_dask.sigma_clip_spectrally(1, save_to_tmp_dir=True) # doctest: +SKIP ...
CPU times: user 51.7 s, sys: 1.29 s, total: 52.9 s Wall time: 51.5 s Using the parallel options mentioned above results in even better performance:: >>> cube_dask.use_dask_scheduler('threads', num_workers=4) # doctest: +SKIP >>> %time cube_dask.sigma_clip_spectrally(1, save_to_tmp_dir=True) # doctest: +SKIP ... CPU times: user 1min 9s, sys: 1.44 s, total: 1min 11s Wall time: 18.5 s In this case, the wall time is 3x faster (and 21x faster than the regular spectral cube class without dask). Applying custom functions to cubes ---------------------------------- Like the :class:`~spectral_cube.SpectralCube` class, the :class:`~spectral_cube.DaskSpectralCube` and :class:`~spectral_cube.DaskVaryingResolutionSpectralCube` classes have methods for applying custom functions to all the spectra or all the spatial images in a cube: :meth:`~spectral_cube.DaskSpectralCube.apply_function_parallel_spectral` and :meth:`~spectral_cube.DaskSpectralCube.apply_function_parallel_spatial`. By default, these methods take functions that apply to individual spectra or images, but this can be quite slow for large spectral cubes. If possible, you should consider supplying a function that can accept 3-d cubes and operate on all spectra or image slices in a vectorized way. To demonstrate this, we will read in a mid-sized CASA dataset with 623 channels and 768x768 pixels in the image plane:: >>> large = SpectralCube.read('large_spectral_cube.image', format='casa_image', use_dask=True) # doctest: +SKIP >>> large # doctest: +SKIP DaskVaryingResolutionSpectralCube with shape=(623, 768, 768) and unit=Jy / beam: n_x: 768 type_x: RA---SIN unit_x: deg range: 290.899389 deg: 290.932404 deg n_y: 768 type_y: DEC--SIN unit_y: deg range: 14.501466 deg: 14.533425 deg n_s: 623 type_s: FREQ unit_s: Hz range: 216201517973.483 Hz:216277445708.200 Hz As an example, we will apply sigma clipping to all spectra in the cube. Note that there is a method to do this (:meth:`~spectral_cube.DaskSpectralCube.sigma_clip_spectrally`) but for the purposes of demonstration, we will set up the function ourselves and apply it with :meth:`~spectral_cube.DaskSpectralCube.apply_function_parallel_spectral`. We will use the :func:`~astropy.stats.sigma_clip` function from astropy:: >>> from astropy.stats import sigma_clip By default, this function returns masked arrays, but to apply this to our spectral cube, we need it to return a plain Numpy array with NaNs for the masked values. In addition, the original function tends to return warnings we want to silence, so we can do this here too:: >>> import warnings >>> import numpy as np >>> def sigma_clip_with_nan(*args, **kwargs): ... with warnings.catch_warnings(): ... warnings.simplefilter('ignore') ... return sigma_clip(*args, axis=0, **kwargs).filled(np.nan) The ``axis=0`` is so that if the function is passed a cube, it will still work properly. Let's now call :meth:`~spectral_cube.DaskSpectralCube.apply_function_parallel_spectral`, including the ``save_to_tmp_dir`` option mentioned previously to force the calculation and the storage of the result to disk:: >>> clipped_cube = large.apply_function_parallel_spectral(sigma_clip_with_nan, sigma=3, ... save_to_tmp_dir=True) # doctest: +SKIP [########################################] | 100% Completed | 1min 42.3s The ``sigma`` argument is passed to the ``sigma_clip_with_nan`` function. 
We now call this again but specifying that the ``sigma_clip_with_nan`` function can also take cubes, using the ``accepts_chunks=True`` option (note that for this to work properly, the wrapped function needs to include ``axis=0`` in the call to :func:`~astropy.stats.sigma_clip` as shown above):: >>> clipped_cube = large.apply_function_parallel_spectral(sigma_clip_with_nan, sigma=3, ... accepts_chunks=True, ... save_to_tmp_dir=True) # doctest: +SKIP [########################################] | 100% Completed | 56.8s This leads to an improvement in performance of 1.8x in this case. Efficient global statistics --------------------------- If you are interested in computing a number of global statistics (e.g. min, max, mean) for a whole cube, and want to avoid separate calls which would lead to the data being read each time, it is also possible to compute these statistics in a way that each chunk is accessed only once - this is done via the :meth:`~spectral_cube.DaskSpectralCube.statistics` method which returns a dictionary of statistics, which are named using the same convention as CASA's `ia.statistics `_:: >>> stats = cube.statistics() # doctest: +IGNORE_OUTPUT >>> sorted(stats) ['max', 'mean', 'min', 'npts', 'rms', 'sigma', 'sum', 'sumsq'] >>> stats['min'] >>> stats['mean'] This method should respect the current scheduler, so you may be able to get better performance with a multi-threaded scheduler. Reading in CASA data and default chunk size ------------------------------------------- CASA image datasets are typically stored on disk with very small chunks - if we mapped these directly to dask array chunks, this would be very inefficient: the `dask task graph `_ would then contain in some cases tens of thousands of chunks, and reading the data from disk would be slow because only small amounts of data would be read at a time. To avoid this, the CASA loader for :class:`~spectral_cube.DaskSpectralCube` makes use of the `casa-formats-io `_ package to combine neighboring chunks on disk into a single chunk. The final chunk size is chosen by casa-formats-io by default, but it is also possible to control this by using the ``target_chunksize`` argument to the :meth:`~spectral_cube.DaskSpectralCube.read` method:: >>> cube = SpectralCube.read('spectral_cube.image', format='casa_image', ... target_chunksize=1000000, use_dask=True) # doctest: +SKIP The chunk size is in number of elements, so assuming 64-bit floating point data, a target chunk size of 1000000 translates to a chunk size in memory of 8 MB. The target chunk size is interpreted as a maximum chunk size, so the largest possible chunk size smaller than or equal to this limit is used. The chunks on disk are combined along the x image direction, then y, and then spectral - this cannot be customized since it is dependent on how the chunks are organized on disk. There is no single value of ``target_chunksize`` that will be optimal for all use cases - in general the chunk size should ideally be large enough that the I/O is not inefficient and that there are not too many chunks in the final cube, but at the same time, when dealing with cubes larger than memory, it is important that the chunks cover only part of the image plane - if chunks were combined such that there was only one chunk in the x and y directions, then any operation that requires rechunking so that there is only one chunk in the spectral dimension (such as spectral sigma clipping) would result in the whole cube being loaded.
The default value is 1000000, which produces 8 MB chunks: large enough that a 40 GB cube would have only 5000 chunks, but small enough that even if 100 such chunks are combined in e.g. the spectral dimension, the memory usage is still reasonable (800 MB). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/errors.rst0000644000175100001710000001171000000000000016604 0ustar00runnerdocker.. doctest-skip-all Explanations of commonly-encountered error messages =================================================== Beam parameters differ ---------------------- If you are using spectral cubes produced by CASA's tclean, the cube may have a different *beam size* for each channel. In this case, it will be loaded as a `~spectral_cube.VaryingResolutionSpectralCube` object. If you perform any operations spanning the spectral axis, for example ``cube.moment0(axis=0)`` or ``cube.max(axis=0)``, you may encounter errors like this one: .. code:: Beams differ by up to 1.0x, which is greater than the threshold 0.01. This occurs if the beam sizes differ by more than the specified threshold factor. A default threshold of 1% is set because for most interferometric images, beam differences on this scale are negligible (they correspond to flux measurement errors of 10^-4). To inspect the beam properties, look at the ``beams`` attribute, for example: .. code:: >>> cube.beams [Beam: BMAJ=1.1972888708114624 arcsec BMIN=1.0741511583328247 arcsec BPA=72.71219635009766 deg, Beam: BMAJ=1.1972869634628296 arcsec BMIN=1.0741279125213623 arcsec BPA=72.71561431884766 deg, Beam: BMAJ=1.1972919702529907 arcsec BMIN=1.0741302967071533 arcsec BPA=72.71575164794922 deg, ... Beam: BMAJ=1.1978825330734253 arcsec BMIN=1.0744788646697998 arcsec BPA=72.73623657226562 deg, Beam: BMAJ=1.1978733539581299 arcsec BMIN=1.0744799375534058 arcsec BPA=72.73489379882812 deg, Beam: BMAJ=1.197875738143921 arcsec BMIN=1.0744699239730835 arcsec BPA=72.73745727539062 deg] In this example, the beams differ by a tiny amount that is below the threshold. However, sometimes you will encounter cubes with dramatically different beam sizes, and spectral-cube will prevent you from performing operations along the spectral axis with these beams because such operations are poorly defined. There are several options to manage this problem: 1. Increase the threshold. This is best done if the beams still differ by a small amount, but larger than 1%. To do this, set ``cube.beam_threshold = [new value]``. This is the `"tape over the check engine light" `_ approach; use with caution. 2. Convolve the cube to a common resolution using `~spectral_cube.SpectralCube.convolve_to`. This is again best if the largest beam is only slightly larger than the smallest. 3. Mask out the bad channels. For example: .. code:: good_beams = cube.identify_bad_beams(threshold=0.1) mcube = cube.mask_out_bad_beams(threshold=0.1) Moment-2 or FWHM calculations give unexpected NaNs -------------------------------------------------- It is fairly common to have moment 2 calculations return NaN values at pixels where real values are expected, e.g., at pixels where both moment0 and moment1 return real values. Most commonly, this is caused by "bad baselines", specifically, by large sections of the spectrum being slightly negative at large distances from the centroid position (the moment 1 position).
Because moment 2 weights pixels at larger distances more highly (as the square of the distance), slight negative values at large distances can result in negative values entering the square root when computing the line width or the FWHM. The solution is either to make a tighter mask, excluding the pixels far from the centroid position, or to ensure that the baseline does not have any negative systematic offset. Looking at images with matplotlib --------------------------------- Matplotlib accesses a lot of hidden properties of arrays when plotting. If you try to show a slice with ``imshow``, you may encounter the WCS-related error:: NotImplementedError: Reversing an axis is not implemented. If you see this error, the only solution at present is to specify ``origin='lower'``, which is the standard for images anyway. For example:: import pylab as pl pl.imshow(cube[5,:,:], origin='lower') should work, where ``origin='upper'`` will not. This is due to a limitation in ``astropy.wcs`` slicing. An alternative option, if it is absolutely necessary to use ``origin='upper'`` or if you encounter other matplotlib-related issues, is to use the ``.value`` attribute of the slice to get a bald numpy array to plot:: import pylab as pl pl.imshow(cube[5,:,:].value) Silencing Warnings ------------------ If you don't like seeing warnings about potential slowdowns, etc., the following will catch and disable those warnings (see also http://docs.astropy.org/en/stable/warnings.html): .. code:: python import warnings from spectral_cube.utils import SpectralCubeWarning warnings.filterwarnings(action='ignore', category=SpectralCubeWarning, append=True) This will prevent any spectral-cube related warnings from being displayed. If you'd like more granular control over which warnings to ignore, look at spectral-cube/utils.py, which lists a wide range of warning types. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/examples.rst0000644000175100001710000001475100000000000017116 0ustar00runnerdocker.. doctest-skip-all Examples ======== Note that these examples are not tested by continuous integration tests; it is possible they will become out-of-date over time. If you notice any mistakes or inconsistencies, please post them at https://github.com/radio-astro-tools/spectral-cube/issues. 1. From a cube with many lines, extract each line and create moment maps using the brightest line as a mask: .. code-block:: python import numpy as np from spectral_cube import SpectralCube from astropy import units as u # Read the FITS cube # And change the units back to Hz # (note that python doesn't care about the line breaks here) cube = (SpectralCube .read('my_multiline_file.fits') .with_spectral_unit(u.Hz)) # Lines to be analyzed (including brightest_line) my_line_list = [362.630304, 364.103249, 363.945894, 363.785397, 362.736048] * u.GHz my_line_widths = [150.0, 80.0, 80.0, 80.0, 80.0] * u.km/u.s my_line_names = ['HNC43','H2COJ54K4','H2COJ54K2','HC3N4039','H2COJ54K0'] # These are: # H2CO 5(4)-4(4) at 364.103249 GHz # H2CO 5(24)-4(23) at 363.945894 GHz # HC3N 40-39 at 363.785397 GHz # H2CO 5(05)-4(04) at 362.736048 GHz (actually a blend with HNC 4-3...) 
brightest_line = 362.630304*u.GHz # HNC 4-3 # What is the maximum width spanned by the galaxy (in velocity) width = 150*u.km/u.s # Velocity center vz = 258*u.km/u.s # Use the brightest line to identify the appropriate peak velocities, but ONLY # from a slab including +/- width: brightest_cube = (cube .with_spectral_unit(u.km/u.s, rest_value=brightest_line, velocity_convention='optical') .spectral_slab(vz-width, vz+width)) # velocity of the brightest pixel peak_velocity = brightest_cube.spectral_axis[brightest_cube.argmax(axis=0)] # make a spatial mask excluding pixels with no signal peak_amplitude = brightest_cube.max(axis=0) # Create a noise map from a line-free region. # found this range from inspection of a spectrum: # s = cube.max(axis=(1,2)) # s.quicklook() noisemap = cube.spectral_slab(362.603*u.GHz, 363.283*u.GHz).std(axis=0) spatial_mask = peak_amplitude > 3*noisemap # Label for the observed source, used in the output file names below # (a placeholder; the original example assumes it is defined elsewhere) target = 'mysource' # Now loop over EACH line, extracting moments etc. from the appropriate region: # we'll also apply a transition-dependent width (my_line_widths) here because # these fainter lines do not have peaks as far out as the bright line. for line_name,line_freq,line_width in zip(my_line_names,my_line_list,my_line_widths): subcube = cube.with_spectral_unit(u.km/u.s, rest_value=line_freq, velocity_convention='optical' ).spectral_slab(peak_velocity.min()-line_width, peak_velocity.max()+line_width) # this part makes a cube of velocities for masking work temp = subcube.spectral_axis velocities = np.tile(temp[:,None,None], subcube.shape[1:]) # `velocities` has the same shape as `subcube` # now we use the velocities from the brightest line to create a mask region # in the same velocity range but with different rest frequencies (different # lines) mask = np.abs(peak_velocity - velocities) < line_width # Mask on a pixel-by-pixel basis with a 1-sigma cut signal_mask = subcube > noisemap # the mask is a cube, the spatial mask is a 2d array, but in this case # numpy knows how to combine them properly # (signal_mask is a different type, so it can't be combined with the others # yet - https://github.com/radio-astro-tools/spectral-cube/issues/231) msubcube = subcube.with_mask(mask & spatial_mask).with_mask(signal_mask) # Then make & save the moment maps for moment in (0,1,2): mom = msubcube.moment(order=moment, axis=0) mom.hdu.writeto("moment{0}/{1}_{2}_moment{0}.fits".format(moment,target,line_name), overwrite=True) 2. Use aplpy (in a slightly unsupported way) to make an RGB velocity movie .. code-block:: python import os import aplpy import numpy as np from astropy import units as u from spectral_cube import SpectralCube cube = SpectralCube.read('file.fits') prefix = 'HC3N' # chop out the NaN borders cmin = cube.minimal_subcube() # Create the WCS template F = aplpy.FITSFigure(cmin[0].hdu) # display limits for the RGB scaling (placeholders; adjust to taste) vmin = cmin.min() vmax = cmin.max() # decide on the velocity range v1 = 30*u.km/u.s v2 = 60*u.km/u.s # determine pixel range p1 = cmin.closest_spectral_channel(v1) p2 = cmin.closest_spectral_channel(v2) for jj,ii in enumerate(range(p1, p2-1)): rgb = np.array([cmin[ii+2], cmin[ii+1], cmin[ii]]).T.swapaxes(0,1) # clip to the manually set display limits rgb[rgb > vmax.value] = vmax.value rgb[rgb < vmin.value] = vmin.value # this is the unsupported little bit...
F._ax1.clear() F._ax1.imshow((rgb-vmin.value)/(vmax-vmin).value, extent=F._extent) v1_ = int(np.round(cube.spectral_axis[ii].value)) v2_ = int(np.round(cube.spectral_axis[ii+2].value)) # then write out the files F.save('rgb/{2}_v{0}to{1}.png'.format(v1_, v2_, prefix)) # make a sorted version for use with ffmpeg (remove any stale link first) if os.path.exists('rgb/{0:04d}.png'.format(jj)): os.remove('rgb/{0:04d}.png'.format(jj)) os.link('rgb/{2}_v{0}to{1}.png'.format(v1_, v2_, prefix), 'rgb/{0:04d}.png'.format(jj)) print("Done with frame {1}: channel {0}".format(ii, jj)) os.system('ffmpeg -y -i rgb/%04d.png -c:v libx264 -pix_fmt yuv420p -vf "scale=1024:768,setpts=10*PTS" -r 10 rgb/{0}_RGB_movie.mp4'.format(prefix)) 3. Extract a beam-weighted spectrum from a cube Each spectral cube has a 'beam' parameter if you have radio_beam installed. You can use that to create a beam kernel: .. code:: python from astropy import units as u # the pixel scale must be given as an angular quantity kernel = cube.beam.as_kernel(cube.wcs.pixel_scale_matrix[1,1] * u.deg) Find the pixel you want to integrate over from the image, e.g., .. code:: python x,y = 500, 150 Then, cut out an appropriate sub-cube and integrate over it: .. code-block:: python kernsize = kernel.shape[0] half = kernsize // 2 subcube = cube[:, y-half:y+half, x-half:x+half] # create a boolean mask at the 1% of peak level (you can modify this) mask = kernel.array > (0.01*kernel.array.max()) msubcube = subcube.with_mask(mask) # Then, take an appropriate beam weighting weighted_cube = msubcube * kernel.array # and *sum* (do not average!) over the weighted cube. beam_weighted_spectrum = weighted_cube.sum(axis=(1,2)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/index.rst0000644000175100001710000000567600000000000016405 0ustar00runnerdockerSpectral Cube documentation =========================== The spectral-cube package provides an easy way to read, manipulate, analyze, and write data cubes with two positional dimensions and one spectral dimension, optionally with Stokes parameters. It provides the following main features: - A uniform interface to spectral cubes, robust to the wide range of conventions of axis order, spatial projections, and spectral units that exist in the wild. - Easy extraction of cube sub-regions using physical coordinates. - Ability to easily create, combine, and apply masks to datasets. - Basic summary statistic methods like moments and array aggregates. - Designed to work with datasets too large to load into memory. Quick start ----------- Here's a simple script demonstrating the spectral-cube package:: >>> import astropy.units as u >>> from astropy.utils import data >>> from spectral_cube import SpectralCube >>> fn = data.get_pkg_data_filename('tests/data/example_cube.fits', 'spectral_cube') >>> cube = SpectralCube.read(fn) >>> print(cube) SpectralCube with shape=(7, 4, 3) and unit=Jy / beam: n_x: 3 type_x: RA---ARC unit_x: deg range: 52.231466 deg: 52.231544 deg n_y: 4 type_y: DEC--ARC unit_y: deg range: 31.243639 deg: 31.243739 deg n_s: 7 type_s: VRAD unit_s: m / s range: 14322.821 m / s: 14944.909 m / s # extract the subcube between 98 and 100 GHz >>> slab = cube.spectral_slab(98 * u.GHz, 100 * u.GHz) # doctest: +SKIP # Ignore elements fainter than 1 Jy/beam >>> masked_slab = slab.with_mask(slab > 1*u.Jy/u.beam) # doctest: +SKIP # Compute the first moment and write to file >>> m1 = masked_slab.moment(order=1) # doctest: +SKIP >>> m1.write('moment_1.fits') # doctest: +SKIP Using spectral-cube ------------------- The package centers around the :class:`~spectral_cube.SpectralCube` class.
In the following sections, we look at how to read data into this class, manipulate spectral cubes, extract moment maps or subsets of spectral cubes, and write spectral cubes to files. Getting started ^^^^^^^^^^^^^^^ .. toctree:: :maxdepth: 2 installing.rst creating_reading.rst accessing.rst Cube Analysis ^^^^^^^^^^^^^ .. toctree:: :maxdepth: 2 moments.rst errors.rst writing.rst beam_handling.rst masking.rst arithmetic.rst metadata.rst smoothing.rst Subsets ^^^^^^^ .. toctree:: :maxdepth: 2 manipulating.rst spectral_extraction.rst continuum_subtraction.rst Visualization ^^^^^^^^^^^^^ .. toctree:: :maxdepth: 2 quick_looks.rst visualization.rst Other Examples ^^^^^^^^^^^^^^ .. toctree:: :maxdepth: 2 examples.rst There is also an `astropy tutorial `__ on accessing and manipulating FITS cubes with spectral-cube. Advanced ^^^^^^^^ .. toctree:: :maxdepth: 1 dask.rst yt_example.rst big_data.rst api.rst ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/installing.rst0000644000175100001710000000647000000000000017443 0ustar00runnerdockerInstalling ``spectral-cube`` ============================ Requirements ------------ This package has the following dependencies: * `Python `_ Python 3.x * `Numpy `_ 1.8 or later * `Astropy `__ 4.0 or later * `radio_beam `_, used when reading in spectral cubes that use the BMAJ/BMIN convention for specifying the beam size. * `Bottleneck `_, optional (speeds up median and percentile operations on cubes with missing data) * `Regions `_ >=0.3dev, optional (Serialises/Deserialises DS9/CRTF region files and handles them. Used when extracting a subcube from region) * `scipy `_, optional (used for subcube creation) * `dask `_, used for the :class:`~spectral_cube.DaskSpectralCube` class * `zarr `_ and `fsspec `_, used for storing computations to disk when using the dask-enabled classes. * `six `_ * `casa-formats-io `_ Installation ------------ To install the latest stable release, you can type:: pip install spectral-cube or you can download the latest tar file from `PyPI `_ and install it using:: python setup.py install If you are using python2.7 (e.g., if you are using CASA version 5 or earlier), the latest spectral-cube version that is compatible is v0.4.4. Note that `Astropy v2.0 __` is the last version to support python2.7. Developer version ----------------- If you want to install the latest developer version of the spectral cube code, you can do so from the git repository:: git clone https://github.com/radio-astro-tools/spectral-cube.git cd spectral-cube python setup.py install You may need to add the ``--user`` option to the last line `if you do not have root access `_. You can also install the latest developer version in a single line with pip:: pip install git+https://github.com/radio-astro-tools/spectral-cube.git Installing into CASA -------------------- Installing packages in CASA is fairly straightforward. The process is described `here `_. In short, you can do the following: First, we need to make sure `pip `__ is installed. 
Start up CASA as normal, and type:: CASA <1>: from setuptools.command import easy_install CASA <2>: easy_install.main(['--user', 'pip']) Now, quit CASA and re-open it, then type the following to install ``spectral-cube``:: CASA <1>: import subprocess, sys CASA <2>: subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', 'spectral-cube']) For CASA versions 5 and earlier, you need to install a specific version of spectral-cube because more recent versions of spectral-cube require python3:: CASA <1>: import subprocess, sys CASA <2>: subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', 'spectral-cube==v0.4.4']) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/manipulating.rst0000644000175100001710000001677000000000000017763 0ustar00runnerdockerManipulating cubes and extracting subcubes ========================================== Modifying the spectral axis --------------------------- As mentioned in :doc:`accessing`, it is straightforward to find the coordinates along the spectral axis using the :attr:`~spectral_cube.spectral_cube.BaseSpectralCube.spectral_axis` attribute:: >>> cube.spectral_axis # doctest: +SKIP [ -2.97198762e+03 -2.63992044e+03 -2.30785327e+03 -1.97578610e+03 -1.64371893e+03 -1.31165176e+03 -9.79584583e+02 -6.47517411e+02 ... 3.15629983e+04 3.18950655e+04 3.22271326e+04 3.25591998e+04 3.28912670e+04 3.32233342e+04] m / s The default units of a spectral axis are determined from the FITS header or WCS object used to initialize the cube, but it is also possible to change the spectral axis unit using :meth:`~spectral_cube.SpectralCube.with_spectral_unit`:: >>> from astropy import units as u >>> cube2 = cube.with_spectral_unit(u.km / u.s) # doctest: +SKIP >>> cube2.spectral_axis # doctest: +SKIP [ -2.97198762e+00 -2.63992044e+00 -2.30785327e+00 -1.97578610e+00 -1.64371893e+00 -1.31165176e+00 -9.79584583e-01 -6.47517411e-01 ... 3.02347296e+01 3.05667968e+01 3.08988639e+01 3.12309311e+01 3.15629983e+01 3.18950655e+01 3.22271326e+01 3.25591998e+01 3.28912670e+01 3.32233342e+01] km / s It is also possible to change from velocity to frequency, for example, but this requires specifying the rest frequency or wavelength as well as a convention for the Doppler shift calculation:: >>> cube3 = cube.with_spectral_unit(u.GHz, velocity_convention='radio', ... rest_value=200 * u.GHz) # doctest: +SKIP >>> cube3.spectral_axis # doctest: +SKIP [ 220.40086492 220.40062079 220.40037667 220.40013254 220.39988841 220.39964429 220.39940016 220.39915604 220.39891191 220.39866778 ... 220.37645231 220.37620818 220.37596406 220.37571993 220.3754758 220.37523168 220.37498755 220.37474342 220.3744993 220.37425517] GHz The new cubes will then preserve the new spectral units when computing moments for example (see :doc:`moments`). Extracting a spectral slab -------------------------- Given a spectral cube, it is easy to extract a sub-cube covering only a subset of the original range in the spectral axis. To do this, you can use the :meth:`~spectral_cube.SpectralCube.spectral_slab` method. This method takes lower and upper bounds for the spectral axis, as well as an optional rest frequency, and returns a new :class:`~spectral_cube.SpectralCube` instance. The bounds can be specified as a frequency, wavelength, or a velocity, but the units have to match the type of the spectral units in the cube (if they do not match, first use :meth:`~spectral_cube.SpectralCube.with_spectral_unit` to ensure that they are in the same units).
The bounds should be given as Astropy :class:`Quantities ` as follows:: >>> from astropy import units as u >>> subcube = cube.spectral_slab(-50 * u.km / u.s, +50 * u.km / u.s) # doctest: +SKIP The resulting cube ``subcube`` (which is also a :class:`~spectral_cube.SpectralCube` instance) then contains all channels that overlap with the range -50 to 50 km/s relative to the rest frequency assumed by the world coordinates, or the rest frequency specified by a prior call to :meth:`~spectral_cube.SpectralCube.with_spectral_unit`. Extracting a sub-cube by indexing --------------------------------- It is also easy to extract a sub-cube from pixel coordinates using standard Numpy slicing notation:: >>> sub_cube = cube[:100, 10:50, 10:50] # doctest: +SKIP This returns a new :class:`~spectral_cube.SpectralCube` object with updated WCS information. .. _reg: Extracting a subcube from a DS9/CRTF region ------------------------------------------- You can use `DS9 `_/`CRTF `_ regions to extract subcubes. The minimal enclosing subcube will be extracted with a two-dimensional mask corresponding to the DS9/CRTF region. `Regions `_ is required for region parsing. CRTF regions may also contain spectral cutout information. This example shows extraction of a subcube from a ds9 region file ``file.reg``. `~regions.read_ds9` parses the ds9 file and converts it to a list of `~regions.Region` objects:: >>> import regions # doctest: +SKIP >>> region_list = regions.read_ds9('file.reg') # doctest: +SKIP >>> sub_cube = cube.subcube_from_regions(region_list) # doctest: +SKIP This one shows extraction of a subcube from a CRTF region file ``file.crtf``, parsed using `~regions.read_crtf`:: >>> import regions # doctest: +SKIP >>> region_list = regions.read_crtf('file.crtf') # doctest: +SKIP >>> sub_cube = cube.subcube_from_regions(region_list) # doctest: +SKIP If you want to loop over the individual regions within a single region file, you need to wrap each individual region in a single-element list:: >>> region_list = regions.read_ds9('file.reg') # doctest: +SKIP >>> for region in region_list: # doctest: +SKIP ... sub_cube = cube.subcube_from_regions([region]) # doctest: +SKIP You can also directly use a ds9 region string. This example extracts a 0.1 degree circle around the Galactic Center:: >>> region_str = "galactic; circle(0, 0, 0.1)" # doctest: +SKIP >>> sub_cube = cube.subcube_from_ds9region(region_str) # doctest: +SKIP Similarly, you can also use a CRTF region string:: >>> region_str = "circle[[0deg, 0deg], 0.1deg], coord=galactic, range=[150km/s, 300km/s]" # doctest: +SKIP >>> sub_cube = cube.subcube_from_crtfregion(region_str) # doctest: +SKIP CRTF regions that specify a subset in the spectral dimension can be used to produce full 3D cutouts. The ``meta`` attribute of a `regions.Region` object contains the spectral information for that region in the three special keywords ``range``, ``restfreq``, and ``veltype``:: >>> import regions # doctest: +SKIP >>> from astropy import units as u >>> regpix = regions.RectanglePixelRegion(regions.PixCoord(0.5, 1), width=4, height=2) # doctest: +SKIP >>> regpix.meta['range'] = [150 * u.km/u.s, 300 * u.km/u.s] # spectral range # doctest: +SKIP >>> regpix.meta['restfreq'] = [100 * u.GHz] # rest frequency # doctest: +SKIP >>> regpix.meta['veltype'] = 'OPTICAL' # velocity convention # doctest: +SKIP >>> subcube = cube.subcube_from_regions([regpix]) # doctest: +SKIP If ``range`` is specified but the other two keywords are not, the extraction will likely fail, so it is safest to always set all three together.
Extract the minimal valid subcube --------------------------------- If you have a mask that masks out some of the cube edges, such that the resulting sub-cube might be smaller in memory, it can be useful to extract the minimal enclosing sub-cube:: >>> sub_cube = cube.minimal_subcube() # doctest: +SKIP You can also shrink any cube by this mechanism:: >>> sub_cube = cube.with_mask(smaller_region).minimal_subcube() # doctest: +SKIP Extract a spatial and spectral subcube -------------------------------------- There is a generic subcube function that allows slices in the spatial and spectral axes simultaneously, as long as the spatial axes are aligned with the pixel axes. An arbitrary example looks like this:: >>> sub_cube = cube.subcube(xlo=5*u.deg, xhi=6*u.deg, # doctest: +SKIP ylo=2*u.deg, yhi=2.1*u.deg, # doctest: +SKIP zlo=50*u.GHz, zhi=51*u.GHz) # doctest: +SKIP ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/masking.rst0000644000175100001710000002100300000000000016721 0ustar00runnerdockerMasking ======= Getting started --------------- In addition to supporting the representation of data and associated WCS, it is also possible to attach a boolean mask to the :class:`~spectral_cube.SpectralCube` class. Masks can take various forms, but one of the more common ones is a cube with the same dimensions as the data that contains e.g. the boolean value `True` where data should be used, and the value `False` where the data should be ignored (though it is also possible to flip the convention around; see :ref:`mask_inclusion_exclusion`). To create a boolean mask from a boolean array ``mask_array``, you can for example use:: >>> from astropy import units as u >>> from spectral_cube import BooleanArrayMask >>> mask = BooleanArrayMask(mask=mask_array, wcs=cube.wcs) # doctest: +SKIP .. note:: Currently, the mask convention is opposite to that used by Numpy masked arrays and Astropy ``Table`` masks. Using a pure boolean array may not always be the most efficient solution because it may require a large amount of memory. You can also create a mask using simple conditions directly on the cube values themselves, for example:: >>> include_mask = cube > 1.3*u.K # doctest: +SKIP This is more efficient because the condition is actually evaluated on-the-fly as needed. Note that units equivalent to the cube's units must be used. Masks can be combined using standard boolean comparison operators:: >>> new_mask = (cube > 1.3*u.K) & (cube < 100.*u.K) # doctest: +SKIP The available operators are ``&`` (and), ``|`` (or), and ``~`` (not). To apply a new mask to a :class:`~spectral_cube.SpectralCube`, use the :meth:`~spectral_cube.SpectralCube.with_mask` method, which can take a mask and combine it with any pre-existing mask:: >>> cube2 = cube.with_mask(new_mask) # doctest: +SKIP In the above example, ``cube2`` contains a mask that is the ``&`` combination of ``new_mask`` with the existing mask on ``cube``. The ``cube2`` object contains a view to the same data as ``cube``, so no data is copied during this operation. Boolean arrays can also be used as input to :meth:`~spectral_cube.SpectralCube.with_mask`, assuming the shape of the mask and the data match:: >>> cube2 = cube.with_mask(boolean_array) # doctest: +SKIP Any boolean array that can be `broadcast `_ to the cube shape can be used as a mask.
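As a minimal sketch of that broadcasting behavior (assuming, for illustration, that the cube is in Jy/beam and that a 0.5 Jy/beam cut is meaningful for your data): a purely spatial 2D boolean array with shape ``(ny, nx)`` is broadcast along the spectral axis, so every channel receives the same spatial mask::

    >>> peak_map = cube.max(axis=0)  # doctest: +SKIP
    >>> spatial_mask = peak_map > 0.5*u.Jy/u.beam  # 2D boolean array, shape (ny, nx)  # doctest: +SKIP
    >>> cube2 = cube.with_mask(spatial_mask)  # broadcast over all channels  # doctest: +SKIP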
Accessing masked data --------------------- As mentioned in :doc:`accessing`, the raw and unmasked data can be accessed with the `spectral_cube.spectral_cube.BaseSpectralCube.unmasked_data` attribute. You can access the masked data using ``filled_data``. This array is a copy of the original data with any masked value replaced by a fill value (which is ``np.nan`` by default but can be changed using the ``fill_value`` option in the :class:`~spectral_cube.SpectralCube` initializer). The 'filled' data is accessed with e.g.:: >>> slice_filled = cube.filled_data[0,:,:] # doctest: +SKIP Accessing the filled data is still efficient, even for large datasets, because the data are loaded and filled only when you access the actual data values. If you are only interested in getting a flat (i.e. 1-d) array of all the non-masked values, you can also make use of the :meth:`~spectral_cube.SpectralCube.flattened` method:: >>> flat_array = cube.flattened() # doctest: +SKIP Fill values ----------- When accessing the data (see :doc:`accessing`), the mask may be applied to the data and the masked values replaced by a *fill* value. This fill value can be set using the ``fill_value`` argument of the :class:`~spectral_cube.SpectralCube` initializer, and is set to ``np.nan`` by default. To change the fill value on a cube, you can make use of the :meth:`~spectral_cube.SpectralCube.with_fill_value` method:: >>> cube2 = cube.with_fill_value(0.) # doctest: +SKIP This returns a new :class:`~spectral_cube.SpectralCube` instance that contains a view to the same data in ``cube`` (so no data are copied). .. _mask_inclusion_exclusion: Inclusion and Exclusion ----------------------- The term "mask" is often used to refer both to the act of excluding and to the act of including pixels in an analysis. To be explicit about how masks behave, all mask objects have an :meth:`~spectral_cube.masks.MaskBase.include` method that returns a boolean array. `True` values in this array indicate that the pixel is included/valid, and not filtered/replaced in any way. Conversely, `True` values in the output from :meth:`~spectral_cube.masks.MaskBase.exclude` indicate the pixel is excluded/invalid, and will be filled/filtered. The inclusion/exclusion behavior of any mask can be inverted via:: >>> mask_inverse = ~mask # doctest: +SKIP Advanced masking ---------------- Masks based on simple functions that operate on the initial data can be defined using the :class:`~spectral_cube.LazyMask` class. The motivation behind the :class:`~spectral_cube.LazyMask` class is that it is essentially equivalent to a boolean array, but the boolean values are computed on-the-fly as needed, meaning that the whole boolean array does not ever necessarily need to be computed or stored in memory, making it ideal for very large datasets. The function passed to :class:`~spectral_cube.LazyMask` should be a simple function taking one argument - the dataset itself:: >>> from spectral_cube import LazyMask, SpectralCube >>> cube = SpectralCube.read(...) # doctest: +SKIP >>> LazyMask(np.isfinite, cube=cube) # doctest: +SKIP or for example:: >>> def threshold(data): ... return data > 3. >>> LazyMask(threshold, cube=cube) # doctest: +SKIP As shown in `Getting Started`_, :class:`~spectral_cube.LazyMask` instances can also be defined directly by specifying conditions on :class:`~spectral_cube.SpectralCube` objects: >>> cube > 5*u.K # doctest: +SKIP LazyComparisonMask(...) ..
TODO: add example for FunctionalMask Outputting masks ---------------- The mask attached to a given :class:`~spectral_cube.SpectralCube` can be converted into a CASA image using :func:`~spectral_cube.io.casa_masks.make_casa_mask`: >>> from spectral_cube.io.casa_masks import make_casa_mask >>> make_casa_mask(cube, 'casa_mask.image', add_stokes=False) # doctest: +SKIP Optionally, a redundant Stokes axis can be added to match the original CASA image. .. Masks may also be appended to an existing CASA image:: .. >>> make_casa_mask(cube, 'casa_mask.image', append_to_img=True, .. img='casa.image') .. note:: Outputting to CASA masks requires that `spectral_cube` be run from a CASA python session. Masking cubes with other cubes ------------------------------ A common use case is to mask a cube based on another cube in the same coordinates. For example, you may want to create a mask of 13CO based on the brightness of 12CO. This can be done straightforwardly if they are on an identical grid:: >>> mask_12co = cube12co > 0.5*u.Jy # doctest: +SKIP >>> masked_cube13co = cube13co.with_mask(mask_12co) # doctest: +SKIP If you see errors such as ``WCS does not match mask WCS``, but you're confident that your two cubes are on the same grid, you should have a look at the ``cube.wcs`` attribute and see if there are subtle differences in the world coordinate parameters. These frequently occur when converting from frequency to velocity as there is inadequate precision in the rest frequency. For example, these two axes are *nearly* identical, but not perfectly so:: Number of WCS axes: 3 CTYPE : 'RA---SIN' 'DEC--SIN' 'VRAD' CRVAL : 269.08866286689999 -21.956244813729999 -3000.000559989533 CRPIX : 161.0 161.0 1.0 PC1_1 PC1_2 PC1_3 : 1.0 0.0 0.0 PC2_1 PC2_2 PC2_3 : 0.0 1.0 0.0 PC3_1 PC3_2 PC3_3 : 0.0 0.0 1.0 CDELT : -1.3888888888889999e-05 1.3888888888889999e-05 299.99999994273281 NAXIS : 0 0 Number of WCS axes: 3 CTYPE : 'RA---SIN' 'DEC--SIN' 'VRAD' CRVAL : 269.08866286689999 -21.956244813729999 -3000.0000242346514 CRPIX : 161.0 161.0 1.0 PC1_1 PC1_2 PC1_3 : 1.0 0.0 0.0 PC2_1 PC2_2 PC2_3 : 0.0 1.0 0.0 PC3_1 PC3_2 PC3_3 : 0.0 0.0 1.0 CDELT : -1.3888888888889999e-05 1.3888888888889999e-05 300.00000001056611 NAXIS : 0 0 In order to compose masks from these, we need to set the ``wcs_tolerance`` parameter:: >>> masked_cube13co = cube13co.with_mask(mask_12co, wcs_tolerance=1e-3) # doctest: +SKIP which in this case checks equality at the 1e-3 level, truncating the 3rd CRVAL to the point of equality before comparing the values. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/metadata.rst0000644000175100001710000000174700000000000017055 0ustar00runnerdockerMetadata and Headers ==================== The metadata of both :class:`~spectral_cube.SpectralCube` s and :class:`~spectral_cube.lower_dimensional_structures.LowerDimensionalObject` s is stored in their ``.meta`` attribute, which is a dictionary of metadata. When writing these objects to file, or exporting them as FITS :class:`HDUs `, the metadata will be written to the FITS header. If the metadata matches the FITS standard, it will just be directly written, with the dictionary keys replaced with upper-case versions. If the keys are longer than 8 characters, a FITS ``COMMENT`` entry will be entered with the data formatted as ``{key}={value}``. The world coordinate system (WCS) metadata will be handled automatically, as will the beam parameter metadata.
The automation implies that WCS keywords and beam keywords cannot be manipulated directly by changing the ``meta`` dictionary; they must be manipulated through other means (e.g., :doc:`manipulating`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/moments.rst0000644000175100001710000001535700000000000016765 0ustar00runnerdockerMoment maps and statistics ========================== Moment maps ----------- Producing moment maps from a :class:`~spectral_cube.SpectralCube` instance is straightforward:: >>> moment_0 = cube.moment(order=0) # doctest: +SKIP >>> moment_1 = cube.moment(order=1) # doctest: +SKIP >>> moment_2 = cube.moment(order=2) # doctest: +SKIP By default, moments are computed along the spectral dimension, but it is also possible to pass the ``axis`` argument to compute them along a different axis:: >>> moment_0_along_x = cube.moment(order=0, axis=2) # doctest: +SKIP .. note:: These follow the mathematical definition of moments, so the second moment is computed as the variance. For the actual formulas used for the moments, please see :class:`~spectral_cube.SpectralCube.moment`. For linewidth maps, see the `Linewidth maps`_ section. You may also want to convert the unit of the datacube into a velocity one before you can obtain a genuine velocity map via a 1st moment map. So first it will be necessary to apply the :class:`~spectral_cube.SpectralCube.with_spectral_unit` method from this package with the proper attribute settings:: >>> nii_cube = cube.with_spectral_unit(u.km/u.s, velocity_convention='optical', rest_value=6584*u.AA) # doctest: +SKIP Note that the ``rest_value`` in the above code refers to the wavelength of the targeted line in the 1D spectrum corresponding to the 3rd dimension. Also, since not all velocity values are relevant, next we will use the :class:`~spectral_cube.SpectralCube.spectral_slab` method to slice out the chunk of the cube that actually contains the line:: >>> nii_cube = cube.with_spectral_unit(u.km/u.s, velocity_convention='optical', rest_value=6584*u.AA) # doctest: +SKIP >>> nii_subcube = nii_cube.spectral_slab(-60*u.km/u.s,-20*u.km/u.s) # doctest: +SKIP Finally, we can now generate the 1st moment map containing the expected velocity structure:: >>> moment_1 = nii_subcube.moment(order=1) # doctest: +SKIP The moment maps returned are :class:`~spectral_cube.lower_dimensional_structures.Projection` instances, which act like :class:`~astropy.units.Quantity` objects, and also have convenience methods for writing to a file:: >>> moment_0.write('moment0.fits') # doctest: +SKIP >>> moment_1.write('moment1.fits') # doctest: +SKIP and converting the data and WCS to a FITS HDU:: >>> moment_0.hdu # doctest: +SKIP The conversion to HDU objects makes it very easy to use the moment map with plotting packages such as `aplpy `_:: >>> import aplpy # doctest: +SKIP >>> f = aplpy.FITSFigure(moment_0.hdu) # doctest: +SKIP >>> f.show_colorscale() # doctest: +SKIP >>> f.save('moment_0.png') # doctest: +SKIP There is a shortcut for the above, if you have aplpy_ installed:: >>> moment_0.quicklook('moment_0.png') will create the quicklook grayscale image and save it to a png all in one go. Moment map equations ^^^^^^^^^^^^^^^^^^^^ The moments are defined below, using :math:`v` for the spectral (velocity, frequency, wavelength, or energy) axis and :math:`I_v` as the intensity, or otherwise measured flux, value in a given spectral channel. The equation for the 0th moment is: .. 
math:: M_0 = \int I_v dv The equation for the 1st moment is: .. math:: M_1 = \frac{\int v I_v dv}{\int I_v dv} = \frac{\int v I_v dv}{M_0} Higher-order moments (:math:`N\geq2`) are defined as: .. math:: M_N = \frac{\int I_v (v - M_1)^N dv}{M_0} Descriptions of the three most common moments are: * 0th moment - the integrated intensity over the spectral line. Units are cube unit times spectral axis unit (e.g., K km/s). * 1st moment - the intensity-weighted velocity of the spectral line. The unit is the same as the spectral axis unit (e.g., km/s). * 2nd moment - the velocity dispersion or the width of the spectral line. The unit is the spectral axis unit squared (e.g., :math:`km^2/s^2`). To obtain measurements of the linewidth in spectral axis units, see `Linewidth maps`_ below. Linewidth maps -------------- Line width maps based on the 2nd moment maps, as defined above, can be made with either of these two commands:: >>> sigma_map = cube.linewidth_sigma() # doctest: +SKIP >>> fwhm_map = cube.linewidth_fwhm() # doctest: +SKIP `~spectral_cube.SpectralCube.linewidth_sigma` computes a sigma linewidth map along the spectral axis, where sigma is the width of a Gaussian, while `~spectral_cube.SpectralCube.linewidth_fwhm` computes a FWHM linewidth map along the same spectral axis. The linewidth maps are related to the second moment by .. math:: \sigma = \sqrt{M_2} \\ \mathrm{FWHM} = \sigma \sqrt{8 \ln 2} These functions return :class:`~spectral_cube.lower_dimensional_structures.Projection` instances as for the `Moment maps`_. Additional 2D maps ------------------ Other common 2D views of a spectral cube include the maximum/minimum along a dimension and the location of the maximum or the minimum along that dimension. To produce a 2D map of the maximum along a cube dimension, use `~spectral_cube.SpectralCube.max`: >>> max_map = cube.max(axis=0) # doctest: +SKIP Along the spectral axis, this will return the peak intensity map. Similarly we can use `~spectral_cube.SpectralCube.min` to make a 2D minimum map: >>> min_map = cube.min(axis=0) # doctest: +SKIP The `~spectral_cube.SpectralCube.argmax` and `~spectral_cube.SpectralCube.argmin` operations will return the pixel positions of the max/min along that axis: >>> argmax_map = cube.argmax(axis=0) # doctest: +SKIP >>> argmin_map = cube.argmin(axis=0) # doctest: +SKIP These maps are useful for identifying where signal is located within the spectral cube; however, it is often more useful to know the WCS values of those pixels for comparison with other data sets or for modeling. The `~spectral_cube.SpectralCube.argmax_world` and `~spectral_cube.SpectralCube.argmin_world` methods should be used in this case: >>> world_argmax_map = cube.argmax_world(axis=0) # doctest: +SKIP >>> world_argmin_map = cube.argmin_world(axis=0) # doctest: +SKIP Along the spectral axis, `~spectral_cube.SpectralCube.argmax_world` creates the often used "velocity at peak intensity," which may also be called the "peak velocity." .. note:: `cube.argmax_world` and `cube.argmin_world` are currently only defined along the spectral axis, as the example above shows. This is because `argmax_world` and `argmin_world` operate along the pixel axes, and the spatial pixel axes are not independent in WCS coordinates due to the curvature of the sky.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/nitpick-exceptions0000644000175100001710000000240000000000000020275 0ustar00runnerdockerpy:class spectral_cube.spectral_cube.BaseSpectralCube py:obj radio_beam.Beam py:obj Beam py:obj astroquery.splatalogue.Splatalogue py:class spectral_cube.base_class.BaseNDClass py:class spectral_cube.base_class.SpectralAxisMixinClass py:class spectral_cube.base_class.SpatialCoordMixinClass py:class spectral_cube.base_class.MaskableArrayMixinClass py:class spectral_cube.base_class.MultiBeamMixinClass py:class spectral_cube.base_class.BeamMixinClass py:class spectral_cube.base_class.HeaderMixinClass py:class spectral_cube.lower_dimensional_structures.BaseOneDSpectrum py:obj spectral_cube.lower_dimensional_structures.OneDSpectrum.quicklook # yt references py:obj yt.surface.export_sketchfab py:obj yt.show_colormaps # aplpy reference py:obj aplpy.FITSFigure # numpy inherited docstrings py:obj dtype py:obj a py:obj a.size == 1 py:obj n py:obj ndarray py:obj args py:obj Quantity py:obj conjugate py:obj numpy.conjugate py:obj numpy.lib.stride_tricks.as_strided py:obj x py:obj i py:obj j py:obj axis1 py:obj axis2 py:obj numpy.ctypeslib py:obj ndarray_subclass py:obj ndarray.T py:obj order py:obj refcheck py:obj val py:obj offset py:obj lexsort py:obj a.transpose() py:obj new_order py:obj inplace py:obj subok py:obj ndarray.setflags py:obj ndarray.flat py:obj arr_t ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/quick_looks.rst0000644000175100001710000000116700000000000017620 0ustar00runnerdockerQuick Looks =========== Once you've loaded a cube, you will inevitably want to look at it in various ways. Slices in any direction have ``quicklook`` methods: >>> cube[50,:,:].quicklook() # show an image # doctest: +SKIP >>> cube[:, 50, 50].quicklook() # plot a spectrum # doctest: +SKIP The same can be done with moments: >>> cube.moment0(axis=0).quicklook() # doctest: +SKIP PVSlicer -------- The `pvextractor `_ package comes with a GUI that has a simple matplotlib image viewer. To activate it for your cube: >>> cube.to_pvextractor() # doctest: +SKIP ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/rtd-pip-requirements0000644000175100001710000000032700000000000020563 0ustar00runnerdocker-e git+http://github.com/astropy/astropy-helpers.git#egg=astropy_helpers numpy Cython -e git+http://github.com/astropy/astropy.git#egg=astropy -e git+http://github.com/radio-astro-tools/radio-beam.git#egg=radio_beam././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/smoothing.rst0000644000175100001710000001570500000000000017303 0ustar00runnerdockerSmoothing --------- There are two types of smoothing routine available in ``spectral_cube``: spatial and spectral. Spatial Smoothing ================= The `~spectral_cube.SpectralCube.convolve_to` method will convolve each plane of the cube to a common resolution, assuming the cube's resolution is known in advance and stored in the cube's ``beam`` or ``beams`` attribute.
A simple example:: import radio_beam from spectral_cube import SpectralCube from astropy import units as u cube = SpectralCube.read('file.fits') beam = radio_beam.Beam(major=1*u.arcsec, minor=1*u.arcsec, pa=0*u.deg) new_cube = cube.convolve_to(beam) Note that the :meth:`~spectral_cube.SpectralCube.convolve_to` method will work for both :class:`~spectral_cube.VaryingResolutionSpectralCube` instances and single-resolution :class:`~spectral_cube.SpectralCube` instances, but for a :class:`~spectral_cube.VaryingResolutionSpectralCube`, the convolution kernel will be different for each slice. Common Beam selection ^^^^^^^^^^^^^^^^^^^^^ You may want to convolve your cube to the smallest beam that is still larger than all contained beams. To do this, you can use the `~radio_beam.Beams.common_beam` tool. For example:: common_beam = cube.beams.common_beam() new_cube = cube.convolve_to(common_beam) Sometimes, you'll encounter the error "Could not find common beam to deconvolve all beams." This is a real issue: the algorithms currently available do not always converge on a common containing beam. There are two ways to get the algorithm to converge to a valid common beam: 1. **Changing the tolerance.** - You can try to change the tolerance used in the `~radio_beam.commonbeam.getMinVolEllipse` code by passing ``tolerance=1e-5`` to the common beam function:: cube.beams.common_beam(tolerance=1e-5) Convergence may be achieved by either increasing or decreasing the tolerance; the goal is to keep the algorithm from stepping inside the minimum enclosing ellipse, which is what triggers the error. Note that decreasing the tolerance by an order of magnitude will require an order of magnitude more iterations for the algorithm to converge and will take longer to run. 2. **Changing epsilon** - A second parameter ``epsilon`` controls the fraction by which to overestimate the beam size, ensuring that solutions that are marginally smaller than the common beam will not be found by the algorithm:: cube.beams.common_beam(epsilon=1e-3) The default value of ``epsilon=1e-3`` will sample points 0.1% larger than the edge of each beam in the set. Increasing ``epsilon`` ensures that a valid common beam can be found, avoiding the tolerance issue, but will result in overestimating the common beam area. For most radio data sets, where the beam is oversampled by :math:`\sim 3 \mbox{-5}` pixels, moderate increases in ``epsilon`` will increase the common beam area far less than a pixel area, making the overestimation negligible. We recommend testing different values of tolerance to find convergence, and if the error persists, to then slowly increase epsilon until a valid common beam is found. More information can be found in the `radio-beam documentation `_. Alternative approach to spatial smoothing ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ An alternative way to spatially smooth a data cube is to use the :meth:`~spectral_cube.SpectralCube.spatial_smooth` method directly. For example:: from spectral_cube import SpectralCube from astropy.convolution import Gaussian2DKernel cube = SpectralCube.read('/some_path/some_file.fits') kernel = Gaussian2DKernel(x_stddev=1) new_cube = cube.spatial_smooth(kernel) new_cube.write('/some_path/some_other_file.fits') ``x_stddev`` specifies the width of the `Gaussian kernel `_. Any `astropy convolution `_ kernel is acceptable.
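Because any kernel from ``astropy.convolution`` is accepted, other kernels can be swapped in with the same pattern. As a minimal sketch (assuming ``cube`` is loaded as above), smoothing with a 3-pixel boxcar instead of a Gaussian::

    from astropy.convolution import Box2DKernel

    # Box2DKernel is astropy's 2D boxcar kernel; width is in pixels
    kernel = Box2DKernel(width=3)
    new_cube = cube.spatial_smooth(kernel)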
Spectral Smoothing ================== Only :class:`~spectral_cube.SpectralCube` instances with a consistent beam can be spectrally smoothed, so if you have a :class:`~spectral_cube.VaryingResolutionSpectralCube`, you must convolve each slice in it to a common resolution before spectrally smoothing. :meth:`~spectral_cube.SpectralCube.spectral_smooth` will apply a convolution kernel to each spectrum in turn. As of July 2016, a parallelized version is partly written but incomplete. Example:: from spectral_cube import SpectralCube from astropy.convolution import Gaussian1DKernel cube = SpectralCube.read('file.fits') kernel = Gaussian1DKernel(2.5) new_cube = cube.spectral_smooth(kernel) This can be useful if you want to interpolate onto a coarser grid but maintain Nyquist sampling. You can then use the `~spectral_cube.SpectralCube.spectral_interpolate` method to regrid your smoothed spectrum onto a new grid. Say, for example, you have a cube with 0.5 km/s resolution, but you want to resample it onto a 2 km/s grid. You might then choose to smooth by a factor of 4, then downsample by the same factor:: import numpy as np from astropy import units as u # cube.spectral_axis is np.arange(0,10,0.5) for this example new_axis = np.arange(0,10,2)*u.km/u.s fwhm_factor = np.sqrt(8*np.log(2)) smcube = cube.spectral_smooth(Gaussian1DKernel(4/fwhm_factor)) interp_cube = smcube.spectral_interpolate(new_axis, suppress_smooth_warning=True) We include the ``suppress_smooth_warning`` override because there is no way for ``SpectralCube`` to know if you've done the appropriate smoothing (i.e., making sure that your new grid Nyquist-samples the data) prior to the interpolation step. If you don't specify this, it will still work, but you'll be warned that you should preserve Nyquist sampling. If you have a cube with 0.1 km/s resolution (where we assume the resolution corresponds to the FWHM of a Gaussian), and you want to smooth it to 0.25 km/s resolution, you can smooth the cube with a Gaussian kernel that has a width of :math:`\sqrt{0.25^2 - 0.1^2} = 0.229` km/s. For simplicity, this can be done in pixel units. In our example, each channel is 0.1 km/s wide:: import numpy as np from astropy import units as u from spectral_cube import SpectralCube from astropy.convolution import Gaussian1DKernel cube = SpectralCube.read('file.fits') fwhm_factor = np.sqrt(8*np.log(2)) current_resolution = 0.1 * u.km/u.s target_resolution = 0.25 * u.km/u.s pixel_scale = 0.1 * u.km/u.s gaussian_width = ((target_resolution**2 - current_resolution**2)**0.5 / pixel_scale / fwhm_factor) kernel = Gaussian1DKernel(gaussian_width.value) new_cube = cube.spectral_smooth(kernel) new_cube.write('newfile.fits') ``gaussian_width`` is in pixel units but is computed as a dimensionless `~astropy.units.Quantity`; using ``gaussian_width.value`` converts the pixel width into a plain float for the kernel. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/spectral_extraction.rst0000644000175100001710000000631300000000000021350 0ustar00runnerdockerSpectral Extraction =================== A commonly required operation is extracting a spectrum from a part of a cube. The simplest way to get a spectrum from the cube is to slice it along a single spatial pixel:: >>> spectrum = cube[:, 50, 60] # doctest: +SKIP Slicing along the first dimension will create a `~spectral_cube.lower_dimensional_structures.OneDSpectrum` object, which has a few useful capabilities.
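For example, the extracted spectrum can be inspected immediately with the ``quicklook`` method shown in the quick-looks documentation:

>>> spectrum.quicklook()  # plot the spectrum  # doctest: +SKIP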
Aperture Extraction ------------------- Going one level further, you can extract a spectrum from an aperture. We'll start with the simplest variant: a square aperture. The cube can be sliced in pixel coordinates to produce a sub-cube which we then average spatially to get the spectrum:: >>> subcube = cube[:, 50:53, 60:63] # doctest: +SKIP >>> spectrum = subcube.mean(axis=(1,2)) # doctest: +SKIP The spectrum can be obtained using any mathematical operation, such as ``max`` or ``std``, e.g., if you want to obtain the noise spectrum. Slightly more sophisticated aperture extraction ----------------------------------------------- To get the flux in a circular aperture, you need to mask the data. In this example, we don't use any external libraries, but show how to create a circular mask from scratch and apply it to the data.:: >>> import numpy as np # doctest: +SKIP >>> yy, xx = np.indices([5,5], dtype='float') # doctest: +SKIP >>> radius = ((yy-2)**2 + (xx-2)**2)**0.5 # doctest: +SKIP >>> mask = radius <= 2 # doctest: +SKIP >>> subcube = cube[:, 50:55, 60:65] # doctest: +SKIP >>> maskedsubcube = subcube.with_mask(mask) # doctest: +SKIP >>> spectrum = maskedsubcube.mean(axis=(1,2)) # doctest: +SKIP Aperture and spectral extraction using regions ---------------------------------------------- Spectral-cube supports ds9 and crtf regions, so you can use them to create a mask. The ds9/crtf region support relies on `regions `_, which supports most shapes in ds9 and crtf, so you are not limited to circular apertures. In this example, we'll extract a subcube from a ds9 region string using :meth:`~spectral_cube.spectral_cube.BaseSpectralCube.subcube_from_ds9region`:: >>> ds9_str = 'fk5; circle(19:23:43.907, +14:30:34.66, 3")' # doctest: +SKIP >>> subcube = cube.subcube_from_ds9region(ds9_str) # doctest: +SKIP >>> spectrum = subcube.mean(axis=(1, 2)) # doctest: +SKIP Similarly, one can extract a subcube from a crtf region string using :meth:`~spectral_cube.spectral_cube.BaseSpectralCube.subcube_from_crtfregion`:: >>> crtf_str = 'circle[[19:23:43.907, +14:30:34.66], 3"], coord=fk5, range=[150km/s, 300km/s]' # doctest: +SKIP >>> subcube = cube.subcube_from_crtfregion(crtf_str) # doctest: +SKIP >>> spectrum = subcube.mean(axis=(1, 2)) # doctest: +SKIP You can also use a *list* of `~regions.Region` objects to extract a subcube using :meth:`~spectral_cube.spectral_cube.BaseSpectralCube.subcube_from_regions`:: >>> import regions # doctest: +SKIP >>> regpix = regions.RectanglePixelRegion(regions.PixCoord(0.5, 1), width=4, height=2) # doctest: +SKIP >>> subcube = cube.subcube_from_regions([regpix]) # doctest: +SKIP >>> spectrum = subcube.mean(axis=(1, 2)) # doctest: +SKIP To learn more, go to :ref:`reg`. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/stokes.rst0000644000175100001710000000040200000000000016574 0ustar00runnerdocker:orphan: Stokes components ================= We plan to implement the `~spectral_cube.StokesSpectralCube` class and will update the documentation once this class is ready to use. .. TODO: first we need to make sure the StokesSpectralCube class is working.././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/visualization.rst0000644000175100001710000000342700000000000020177 0ustar00runnerdockerVisualization ============= Spectral-cube is not primarily a visualization package, but it has several tools for visualizing subsets of the data.
All lower-dimensional subsets, `~spectral_cube.lower_dimensional_structures.OneDSpectrum` and `~spectral_cube.lower_dimensional_structures.Projection`, have their own ``quicklook`` methods (`~spectral_cube.lower_dimensional_structures.OneDSpectrum.quicklook` and `~spectral_cube.lower_dimensional_structures.Projection.quicklook`, respectively). These methods will plot the data with reasonably well-labeled axes. The two-dimensional viewers default to using `aplpy `_. Because of quirks of how aplpy sets up its plotting window, these methods will create their own figures. If ``use_aplpy`` is set to ``False``, and similarly if you use the ``OneDSpectrum`` quicklook, the data will be overplotted in the most recently used plot window. In principle, one can also simply plot the data. For example, if you have a cube, you could do:: >>> import matplotlib.pyplot as plt # doctest: +SKIP >>> plt.plot(cube[:,0,0]) # doctest: +SKIP to plot a spectrum sliced out of the cube or:: >>> plt.imshow(cube[0,:,:]) # doctest: +SKIP to plot an image slice. .. warning:: There are known incompatibilities with the above plotting approach: matplotlib versions ``<2.1`` will crash, and you will have to clear the plot window to reset it. Other Visualization Tools ========================= To visualize the cubes directly, you can use some of the other tools we provide for pushing cube data into external viewers. See :doc:`yt_example` for using yt as a visualization tool. The `spectral_cube.SpectralCube.to_glue` and `spectral_cube.SpectralCube.to_ds9` methods will send the whole cube to glue and ds9. This approach generally requires loading the whole cube into memory. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/writing.rst0000644000175100001710000000040000000000000016755 0ustar00runnerdockerWriting spectral cubes ====================== You can write out a :class:`~spectral_cube.SpectralCube` instance by making use of the :meth:`~spectral_cube.SpectralCube.write` method:: >>> cube.write('new_cube.fits', format='fits') # doctest: +SKIP ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/docs/yt_example.rst0000644000175100001710000001525000000000000017442 0ustar00runnerdockerVisualizing spectral cubes with yt ================================== Extracting yt objects --------------------- The :class:`~spectral_cube.SpectralCube` class includes a :meth:`~spectral_cube.SpectralCube.to_yt` method that makes it easy to return an object that can be used by `yt `_ to make volume renderings or other visualizations of the data. One common issue with volume rendering of spectral cubes is that you may not want pixels along the spectral axis to be given the same '3-d' size as positional pixels, so the :meth:`~spectral_cube.SpectralCube.to_yt` method includes a ``spectral_factor`` argument that can be used to compress or expand the spectral axis. The :meth:`~spectral_cube.SpectralCube.to_yt` method is used as follows:: >>> ytcube = cube.to_yt(spectral_factor=0.5) # doctest: +SKIP >>> ds = ytcube.dataset # doctest: +SKIP .. WARNING:: The API change in https://github.com/radio-astro-tools/spectral-cube/pull/129 affects the interpretation of the 0-pixel. There may be a 1-pixel offset between the yt cube and the SpectralCube. The ``ds`` object is then a yt object that can be used for rendering! By default the dataset is defined in pixel coordinates, going from ``0.5`` to ``n+0.5``, as would be the case in ds9, for example.
Along the spectral axis, this range will be modified if ``spectral_factor`` does not equal unity. When working with datasets in yt, it may be useful to convert world coordinates to pixel coordinates, so that whenever you have to input a position in yt (e.g., for slicing or volume rendering), you can get the pixel coordinate that corresponds to the desired world coordinate. For this purpose, the method :meth:`~spectral_cube.ytcube.ytCube.world2yt` is provided:: >>> import astropy.units as u >>> pix_coord = ytcube.world2yt([51.424522, ... 30.723611, ... 5205.18071], # units of deg, deg, m/s ... ) # doctest: +SKIP There is also a reverse method provided, :meth:`~spectral_cube.ytcube.ytCube.yt2world`:: >>> world_coord = ytcube.yt2world([ds.domain_center]) # doctest: +SKIP which in this case would return the world coordinates of the center of the dataset in yt. .. TODO: add a way to center it on a specific coordinate and return in world .. coordinate offset. .. note:: The :meth:`~spectral_cube.SpectralCube.to_yt` method and its associated coordinate methods are compatible with both yt v. 2.x and v. 3.0 and later, but use of version 3.0 or later is recommended due to substantial improvements in support for FITS data. For more information on how yt handles FITS datasets, see `the yt docs `_. Visualization example --------------------- This section shows an example of a rendering script that can be used to produce a 3-d isocontour visualization using an object returned by :meth:`~spectral_cube.SpectralCube.to_yt`:: import numpy as np from spectral_cube import SpectralCube from yt.mods import ColorTransferFunction, write_bitmap import astropy.units as u # Read in spectral cube cube = SpectralCube.read('L1448_13CO.fits', format='fits') # Extract the yt object from the SpectralCube instance ytcube = cube.to_yt(spectral_factor=0.75) ds = ytcube.dataset # Set the number of levels, the minimum and maximum level and the width # of the isocontours n_v = 10 vmin = 0.05 vmax = 4.0 dv = 0.02 # Set up color transfer function transfer = ColorTransferFunction((vmin, vmax)) transfer.add_layers(n_v, dv, colormap='RdBu_r') # Set up the camera parameters # Derive the pixel coordinate of the desired center # from the corresponding world coordinate center = ytcube.world2yt([51.424522, 30.723611, 5205.18071]) direction = np.array([1.0, 0.0, 0.0]) width = 100. # pixels size = 1024 camera = ds.camera(center, direction, width, size, transfer, fields=['flux']) # Take a snapshot and save to a file snapshot = camera.snapshot() write_bitmap(snapshot, 'cube_rendering.png', transpose=True) You can move the camera around; see the `yt camera docs `_. Movie Making ------------ There is a simple utility for quick movie making. The default movie is a rotation of the cube around one of the spatial axes, going from PP -> PV space and back.:: >>> cube = SpectralCube.read('cube.fits', format='fits') # doctest: +SKIP >>> ytcube = cube.to_yt() # doctest: +SKIP >>> images = ytcube.quick_render_movie('outdir') # doctest: +SKIP The movie only does rotation, but it is a useful stepping-stone if you wish to learn how to use yt's rendering system. SketchFab Isosurface Contours ----------------------------- For data exploration, making movies can be tedious: it is difficult to control the camera and expensive to generate new renderings. Instead, creating a 'model' from the data and exporting that to SketchFab can be very useful. Only grayscale figures will be created with the quicklook code.
You need an account on sketchfab.com for this to work.:: >>> ytcube.quick_isocontour(title='GRS l=49 13CO 1 K contours', level=1.0) # doctest: +SKIP Here's an example: "GRS l=49 13CO 1 K contours" by keflavich on Sketchfab (the interactive 3D model is embedded in the online version of this page).
You can also export locally to .ply and .obj files, which can be read by many programs (sketchfab, meshlab, blender). See the `yt page `_ for details.:: >>> ytcube.quick_isocontour(export_to='ply', filename='meshes.ply', level=1.0) # doctest: +SKIP >>> ytcube.quick_isocontour(export_to='obj', filename='meshes', level=1.0) # doctest: +SKIP ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/pyproject.toml0000644000175100001710000000020400000000000016516 0ustar00runnerdocker[build-system] requires = ["setuptools", "setuptools_scm", "wheel"] build-backend = 'setuptools.build_meta' ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1633017864.74903 spectral-cube-0.6.0/setup.cfg0000644000175100001710000000372400000000000015435 0ustar00runnerdocker[metadata] name = spectral-cube description = A package for interaction with spectral cubes long_description = file: README.rst author = Adam Ginsburg, Tom Robitaille, Chris Beaumont, Adam Leroy, Erik Rosolowsky, and Eric Koch author_email = adam.g.ginsburg@gmail.com license = BSD url = http://spectral-cube.readthedocs.org edit_on_github = False github_project = radio-astro-tools/spectral-cube [options] zip_safe = False packages = find: install_requires = astropy numpy>=1.8.0 radio_beam>=0.3.3 six dask[array] joblib casa-formats-io [options.extras_require] test = pytest-astropy pytest-cov docs = sphinx-astropy matplotlib novis = zarr fsspec distributed pvextractor regions reproject scipy all = zarr fsspec distributed aplpy glue-core[qt] matplotlib pvextractor regions reproject scipy yt ; python_version<'3.8' [options.package_data] spectral_cube.tests = data/* data/*/* spectral_cube.io.tests = data/*/* [upload_docs] upload-dir = docs/_build/html show-response = 1 [tool:pytest] minversion = 3.0 norecursedirs = build docs/_build doctest_plus = enabled addopts = -p no:warnings doctest_subpackage_requires = spectral_cube/vis*.py = aplpy [coverage:run] omit = spectral-cube/__init__* spectral-cube/conftest.py spectral-cube/*setup* spectral-cube/*/tests/* spectral-cube/tests/test_* spectral-cube/extern/* spectral-cube/utils/compat/* spectral-cube/version* spectral-cube/wcs/docstrings* spectral-cube/_erfa/* */spectral-cube/__init__* */spectral-cube/conftest.py */spectral-cube/*setup* */spectral-cube/*/tests/* */spectral-cube/tests/test_* */spectral-cube/extern/* */spectral-cube/utils/compat/* */spectral-cube/version* */spectral-cube/wcs/docstrings* */spectral-cube/_erfa/* [coverage:report] exclude_lines = pragma: no cover except ImportError raise AssertionError raise NotImplementedError def main\(.*\): pragma: py{ignore_python_version} def _ipython_key_completions_ [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/setup.py0000755000175100001710000000211600000000000015323 0ustar00runnerdocker#!/usr/bin/env python import os import sys from setuptools import setup TEST_HELP = """ Note: running tests is no longer done using 'python setup.py test'. Instead you will need to run: tox -e test If you don't already have tox installed, you can install it with: pip install tox If you only want to run part of the test suite, you can also use pytest directly with:: pip install -e . 
pytest For more information, see: http://docs.astropy.org/en/latest/development/testguide.html#running-tests """ if 'test' in sys.argv: print(TEST_HELP) sys.exit(1) DOCS_HELP = """ Note: building the documentation is no longer done using 'python setup.py build_docs'. Instead you will need to run: tox -e build_docs If you don't already have tox installed, you can install it with: pip install tox For more information, see: http://docs.astropy.org/en/latest/install.html#builddocs """ if 'build_docs' in sys.argv or 'build_sphinx' in sys.argv: print(DOCS_HELP) sys.exit(1) setup(use_scm_version={'write_to': os.path.join('spectral_cube', 'version.py')}) ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1633017864.73303 spectral-cube-0.6.0/spectral_cube/0000755000175100001710000000000000000000000016421 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/__init__.py0000644000175100001710000000217300000000000020535 0ustar00runnerdocker# Licensed under a 3-clause BSD style license - see LICENSE.rst from ._astropy_init import __version__, test from pkg_resources import get_distribution, DistributionNotFound from .spectral_cube import (SpectralCube, VaryingResolutionSpectralCube) from .dask_spectral_cube import (DaskSpectralCube, DaskVaryingResolutionSpectralCube) from .stokes_spectral_cube import StokesSpectralCube from .masks import (MaskBase, InvertedMask, CompositeMask, BooleanArrayMask, LazyMask, LazyComparisonMask, FunctionMask) from .lower_dimensional_structures import (OneDSpectrum, Projection, Slice) # Import the following sub-packages to make sure the I/O functions are registered from .io import casa_image del casa_image from .io import class_lmv del class_lmv from .io import fits del fits __all__ = ['SpectralCube', 'VaryingResolutionSpectralCube', 'DaskSpectralCube', 'DaskVaryingResolutionSpectralCube', 'StokesSpectralCube', 'CompositeMask', 'LazyComparisonMask', 'LazyMask', 'BooleanArrayMask', 'FunctionMask', 'OneDSpectrum', 'Projection', 'Slice' ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/_astropy_init.py0000644000175100001710000000272300000000000021662 0ustar00runnerdocker# Licensed under a 3-clause BSD style license - see LICENSE.rst __all__ = ['__version__', '__githash__'] import os from warnings import warn from astropy.config.configuration import ( update_default_config, ConfigurationDefaultMissingError, ConfigurationDefaultMissingWarning) try: from .version import version as __version__ except ImportError: __version__ = '' # Create the test function for self test from astropy.tests.runner import TestRunner test = TestRunner.make_test_runner_in(os.path.dirname(__file__)) test.__test__ = False __all__ += ['test'] # add these here so we only need to cleanup the namespace at the end config_dir = None if not os.environ.get('ASTROPY_SKIP_CONFIG_UPDATE', False): config_dir = os.path.dirname(__file__) config_template = os.path.join(config_dir, __package__ + ".cfg") if os.path.isfile(config_template): try: update_default_config( __package__, config_dir, version=__version__) except TypeError as orig_error: try: update_default_config(__package__, config_dir) except ConfigurationDefaultMissingError as e: wmsg = (e.args[0] + " Cannot install default profile. 
If you are " "importing from source, this is expected.") warn(ConfigurationDefaultMissingWarning(wmsg)) del e except Exception: raise orig_error ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/_moments.py0000644000175100001710000001126500000000000020621 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import numpy as np from .cube_utils import iterator_strategy from .np_compat import allbadtonan """ Functions to compute moment maps in a variety of ways """ def _moment_shp(cube, axis): """ Return the shape of the moment map Parameters ----------- cube : SpectralCube The cube to collapse axis : int The axis to collapse along (numpy convention) Returns ------- ny, nx """ return cube.shape[:axis] + cube.shape[axis + 1:] def _slice0(cube, axis): """ 0th moment along an axis, calculated slicewise Parameters ---------- cube : SpectralCube axis : int Returns ------- moment0 : array """ shp = _moment_shp(cube, axis) result = np.zeros(shp) view = [slice(None)] * 3 valid = np.zeros(shp, dtype=np.bool) for i in range(cube.shape[axis]): view[axis] = i plane = cube._get_filled_data(fill=np.nan, view=tuple(view)) valid |= np.isfinite(plane) result += np.nan_to_num(plane) * cube._pix_size_slice(axis) result[~valid] = np.nan return result def _slice1(cube, axis): """ 1st moment along an axis, calculated slicewise Parameters ---------- cube : SpectralCube axis : int Returns ------- moment1 : array """ shp = _moment_shp(cube, axis) result = np.zeros(shp) view = [slice(None)] * 3 pix_size = cube._pix_size_slice(axis) pix_cen = cube._pix_cen()[axis] weights = np.zeros(shp) for i in range(cube.shape[axis]): view[axis] = i plane = cube._get_filled_data(fill=0, view=tuple(view)) result += (plane * pix_cen[tuple(view)] * pix_size) weights += plane * pix_size return result / weights def moment_slicewise(cube, order, axis): """ Compute moments by accumulating the result 1 slice at a time """ if order == 0: return _slice0(cube, axis) if order == 1: return _slice1(cube, axis) shp = _moment_shp(cube, axis) result = np.zeros(shp) view = [slice(None)] * 3 pix_size = cube._pix_size_slice(axis) pix_cen = cube._pix_cen()[axis] weights = np.zeros(shp) # would be nice to get mom1 and momn in single pass over data # possible for mom2, not sure about general case mom1 = _slice1(cube, axis) for i in range(cube.shape[axis]): view[axis] = i plane = cube._get_filled_data(fill=0, view=tuple(view)) result += (plane * (pix_cen[tuple(view)] - mom1) ** order * pix_size) weights += plane * pix_size return (result / weights) def moment_raywise(cube, order, axis): """ Compute moments by accumulating the answer one ray at a time """ shp = _moment_shp(cube, axis) out = np.zeros(shp) * np.nan pix_cen = cube._pix_cen()[axis] pix_size = cube._pix_size_slice(axis) for x, y, slc in cube._iter_rays(axis): # the intensity, i.e. 
the weights include = cube._mask.include(data=cube._data, wcs=cube._wcs, view=slc, wcs_tolerance=cube._wcs_tolerance) if not include.any(): continue data = cube.flattened(slc).value * pix_size if order == 0: out[x, y] = data.sum() continue order1 = (data * pix_cen[slc][include]).sum() / data.sum() if order == 1: out[x, y] = order1 continue ordern = (data * (pix_cen[slc][include] - order1) ** order).sum() ordern /= data.sum() out[x, y] = ordern return out def moment_cubewise(cube, order, axis): """ Compute the moments by working with the entire data at once """ pix_cen = cube._pix_cen()[axis] data = cube._get_filled_data() * cube._pix_size_slice(axis) if order == 0: return allbadtonan(np.nansum)(data, axis=axis) if order == 1: return (np.nansum(data * pix_cen, axis=axis) / np.nansum(data, axis=axis)) else: mom1 = moment_cubewise(cube, 1, axis) # insert an axis so it broadcasts properly shp = list(_moment_shp(cube, axis)) shp.insert(axis, 1) mom1 = mom1.reshape(shp) return (np.nansum(data * (pix_cen - mom1) ** order, axis=axis) / np.nansum(data, axis=axis)) def moment_auto(cube, order, axis): """ Build a moment map, choosing a strategy to balance speed and memory. """ strategy = dict(cube=moment_cubewise, ray=moment_raywise, slice=moment_slicewise) return strategy[iterator_strategy(cube, axis)](cube, order, axis) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/analysis_utilities.py0000644000175100001710000003167100000000000022721 0ustar00runnerdockerimport numpy as np from astropy import units as u from six.moves import zip, range from astropy.wcs import WCS from astropy.utils.console import ProgressBar import warnings from .utils import BadVelocitiesWarning from .cube_utils import _map_context from .lower_dimensional_structures import VaryingResolutionOneDSpectrum, OneDSpectrum from .spectral_cube import VaryingResolutionSpectralCube def fourier_shift(x, shift, axis=0, add_pad=False, pad_size=None): ''' Shift a spectrum in the Fourier plane. Parameters ---------- x : np.ndarray Array to be shifted shift : int or float Number of pixels to shift. axis : int, optional Axis to shift along. pad_size : int, optional Pad the array before shifting. Returns ------- x2 : np.ndarray Shifted array. ''' nanmask = ~np.isfinite(x) # If all NaNs, there is nothing to shift # But only if there is no added padding. Otherwise we need to pad if nanmask.all() and not add_pad: return x nonan = x.copy() shift_mask = False if nanmask.any(): nonan[nanmask] = 0.0 shift_mask = True # Optionally pad the edges if add_pad: if pad_size is None: # Pad by the size of the shift pad = np.ceil(shift).astype(int) # Determine edge to pad whether it is a positive or negative shift pad_size = (pad, 0) if shift > 0 else (0, pad) else: assert len(pad_size) pad_nonan = np.pad(nonan, pad_size, mode='constant', constant_values=(0)) if shift_mask: pad_mask = np.pad(nanmask, pad_size, mode='constant', constant_values=(0)) else: pad_nonan = nonan pad_mask = nanmask # Check if there are all NaNs before shifting if nanmask.all(): return np.array([np.NaN] * pad_mask.size) nonan_shift = _fourier_shifter(pad_nonan, shift, axis) if shift_mask: mask_shift = _fourier_shifter(pad_mask, shift, axis) > 0.5 nonan_shift[mask_shift] = np.NaN return nonan_shift def _fourier_shifter(x, shift, axis): ''' Helper function for `~fourier_shift`. 
''' ftx = np.fft.fft(x, axis=axis) m = np.fft.fftfreq(x.shape[axis]) # m_shape = [1] * x.ndim # m_shape[axis] = m.shape[0] # m = m.reshape(m_shape) slices = tuple([slice(None) if ii == axis else None for ii in range(x.ndim)]) m = m[slices] phase = np.exp(-2 * np.pi * m * 1j * shift) x2 = np.real(np.fft.ifft(ftx * phase, axis=axis)) return x2 def get_chunks(num_items, chunk): ''' Parameters ---------- num_items : int Number of total items. chunk : int Size of chunks Returns ------- chunks : list of np.ndarray List of channels in chunks of the given size. ''' items = np.arange(num_items) if num_items == chunk: return [items] chunks = \ np.array_split(items, [chunk * i for i in range(int(num_items / chunk))]) if chunks[-1].size == 0: # Last one is empty chunks = chunks[:-1] if chunks[0].size == 0: # First one is empty chunks = chunks[1:] return chunks def _spectrum_shifter(inputs): spec, shift, add_pad, pad_size = inputs return fourier_shift(spec, shift, add_pad=add_pad, pad_size=pad_size) def stack_spectra(cube, velocity_surface, v0=None, stack_function=np.nanmean, xy_posns=None, num_cores=1, chunk_size=-1, progressbar=False, pad_edges=True, vdiff_tol=0.01): ''' Shift spectra in a cube according to a given velocity surface (peak velocity, centroid, rotation model, etc.). Parameters ---------- cube : SpectralCube The cube velocity_field : Quantity A Quantity array with m/s or equivalent units stack_function : function A function that can operate over a list of numpy arrays (and accepts ``axis=0``) to combine the spectra. `numpy.nanmean` is the default, though one might consider `numpy.mean` or `numpy.median` as other options. xy_posns : list, optional List the spatial positions to include in the stack. For example, if the data is masked by some criterion, the valid points can be given as `xy_posns = np.where(mask)`. num_cores : int, optional Choose number of cores to run on. Defaults to 1. chunk_size : int, optional To limit memory usage, the shuffling of spectra can be done in chunks. Chunk size sets the number of spectra that, if memory-mapping is used, is the number of spectra loaded into memory. Defaults to -1, which is all spectra. progressbar : bool, optional Print progress through every chunk iteration. pad_edges : bool, optional Pad the edges of the shuffled spectra to stop data from rolling over. Default is True. The rolling over occurs since the FFT treats the boundary as periodic. This should only be disabled if you know that the velocity range exceeds the range that a spectrum has to be shuffled to reach `v0`. vdiff_tol : float, optional Allowed tolerance for changes in the spectral axis spacing. Default is 0.01, or 1%. Returns ------- stack_spec : OneDSpectrum The stacked spectrum. ''' if not np.isfinite(velocity_surface).any(): raise ValueError("velocity_surface contains no finite values.") vshape = velocity_surface.shape cshape = cube.shape[1:] if not (vshape == cshape): raise ValueError("Velocity surface map does not match cube spatial " "dimensions.") if xy_posns is None: # Only compute where a shift can be found xy_posns = np.where(np.isfinite(velocity_surface)) if v0 is None: # Set to the mean velocity of the cube if not given. 
v0 = cube.spectral_axis.mean() else: if not isinstance(v0, u.Quantity): raise u.UnitsError("v0 must be a quantity.") spec_unit = cube.spectral_axis.unit if not v0.unit.is_equivalent(spec_unit): raise u.UnitsError("v0 must have units equivalent to the cube's" " spectral unit ({0}).".format(spec_unit)) v0 = v0.to(spec_unit) if v0 < cube.spectral_axis.min() or v0 > cube.spectral_axis.max(): raise ValueError("v0 must be within the range of the spectral " "axis of the cube.") # Calculate the pixel shifts that will be applied. spec_size = np.diff(cube.spectral_axis[:2])[0] # Assign the correct +/- for pixel shifts based on whether the spectral # axis is increasing (-1) or decreasing (+1) vdiff_sign = -1. if spec_size.value > 0. else 1. vdiff = np.abs(spec_size) vel_unit = vdiff.unit # Check to make sure vdiff doesn't change more than the allowed tolerance # over the spectral axis vdiff2 = np.abs(np.diff(cube.spectral_axis[-2:])[0]) if not np.isclose(vdiff2.value, vdiff.value, rtol=vdiff_tol): raise ValueError("Cannot shift spectra on a non-linear axis") vmax = cube.spectral_axis.to(vel_unit).max() vmin = cube.spectral_axis.to(vel_unit).min() if ((np.any(velocity_surface > vmax) or np.any(velocity_surface < vmin))): warnings.warn("Some velocities are outside the allowed range and will be " "masked out.", BadVelocitiesWarning) # issue 580/1 note: numpy <=1.16 will strip units from velocity, >= # 1.17 will not masked_velocities = np.where( (velocity_surface < vmax) & (velocity_surface > vmin), velocity_surface.value, np.nan) velocity_surface = u.Quantity(masked_velocities, velocity_surface.unit) pix_shifts = vdiff_sign * ((velocity_surface.to(vel_unit) - v0.to(vel_unit)) / vdiff).value[xy_posns] # Make a header copy so we can start altering new_header = cube[:, 0, 0].header.copy() if pad_edges: # Enables padding the whole cube such that no spectrum will wrap around # This is critical if a low-SB component is far off of the bright # component that the velocity surface is derived from. # Find max +/- pixel shifts, rounding up to the nearest integer max_pos_shift = np.ceil(np.nanmax(pix_shifts)).astype(int) max_neg_shift = np.ceil(np.nanmin(pix_shifts)).astype(int) if max_neg_shift > 0: # if there are no negative shifts, we can ignore them and just # use the positive shift max_neg_shift = 0 if max_pos_shift < 0: # same for positive max_pos_shift = 0 # The total pixel size of the new spectral axis num_vel_pix = cube.spectral_axis.size + max_pos_shift - max_neg_shift new_header['NAXIS1'] = num_vel_pix # Adjust CRPIX in header new_header['CRPIX1'] += -max_neg_shift pad_size = (-max_neg_shift, max_pos_shift) else: pad_size = None all_shifted_spectra = [] if chunk_size == -1: chunk_size = len(xy_posns[0]) # Create chunks of spectra for read-out.
chunks = get_chunks(len(xy_posns[0]), chunk_size) if progressbar: iterat = ProgressBar(chunks) else: iterat = chunks for i, chunk in enumerate(iterat): gen = ((cube.filled_data[:, y, x].value, shift, pad_edges, pad_size) for y, x, shift in zip(xy_posns[0][chunk], xy_posns[1][chunk], pix_shifts[chunk])) with _map_context(num_cores) as map: shifted_spectra = map(_spectrum_shifter, gen) all_shifted_spectra.extend([out for out in shifted_spectra]) shifted_spectra_array = np.array(all_shifted_spectra) assert shifted_spectra_array.ndim == 2 stacked = stack_function(shifted_spectra_array, axis=0) if hasattr(cube, 'beams'): stack_spec = VaryingResolutionOneDSpectrum(stacked, unit=cube.unit, wcs=WCS(new_header), header=new_header, meta=cube.meta, spectral_unit=vel_unit, beams=cube.beams) else: stack_spec = OneDSpectrum(stacked, unit=cube.unit, wcs=WCS(new_header), header=new_header, meta=cube.meta, spectral_unit=vel_unit, beam=cube.beam) return stack_spec def stack_cube(cube, linelist, vmin, vmax, average=np.nanmean, convolve_beam=None): """ Create a stacked cube by averaging on a common velocity grid. Parameters ---------- cube : SpectralCube The cube linelist : list of Quantities An iterable of Quantities representing line rest frequencies vmin / vmax : Quantity Velocity-equivalent quantities specifying the velocity range to average over average : function A function that can operate over a list of numpy arrays (and accepts ``axis=0``) to average the spectra. `numpy.nanmean` is the default, though one might consider `numpy.mean` or `numpy.median` as other options. convolve_beam : None If the cube is a VaryingResolutionSpectralCube, a convolution beam is required to put the cube onto a common grid prior to spectral interpolation. """ line_cube = cube.with_spectral_unit(u.km/u.s, velocity_convention='radio', rest_value=linelist[0]) if isinstance(line_cube, VaryingResolutionSpectralCube): if convolve_beam is None: raise ValueError("When stacking VaryingResolutionSpectralCubes, " "you must specify a target beam size with the " "keyword `convolve_beam`") reference_cube = line_cube.spectral_slab(vmin, vmax).convolve_to(convolve_beam) else: reference_cube = line_cube.spectral_slab(vmin, vmax) cutout_cubes = [reference_cube.filled_data[:].value] for restval in linelist[1:]: line_cube = cube.with_spectral_unit(u.km/u.s, velocity_convention='radio', rest_value=restval) line_cutout = line_cube.spectral_slab(vmin, vmax) if isinstance(line_cube, VaryingResolutionSpectralCube): line_cutout = line_cutout.convolve_to(convolve_beam) regridded = line_cutout.spectral_interpolate(reference_cube.spectral_axis) cutout_cubes.append(regridded.filled_data[:].value) stacked_cube = average(cutout_cubes, axis=0) hdu = reference_cube.hdu hdu.data = stacked_cube return hdu ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/base_class.py0000644000175100001710000007220600000000000021101 0ustar00runnerdockerfrom astropy import units as u from astropy import log import numpy as np import warnings import abc import astropy from astropy.io.fits import Card from radio_beam import Beam, Beams import dask.array as da from . import wcs_utils from . 
import cube_utils from .utils import BeamWarning, cached, WCSCelestialError, BeamAverageWarning, NoBeamError, BeamUnitsError from .masks import BooleanArrayMask __doctest_skip__ = ['SpatialCoordMixinClass.world'] __all__ = ['BaseNDClass', 'BeamMixinClass', 'HeaderMixinClass', 'MaskableArrayMixinClass', 'MultiBeamMixinClass', 'SpatialCoordMixinClass', 'SpectralAxisMixinClass', ] DOPPLER_CONVENTIONS = {} DOPPLER_CONVENTIONS['radio'] = u.doppler_radio DOPPLER_CONVENTIONS['optical'] = u.doppler_optical DOPPLER_CONVENTIONS['relativistic'] = u.doppler_relativistic class BaseNDClass(object): _cache = {} @property def _nowcs_header(self): """ Return a copy of the header with no WCS information attached """ log.debug("Stripping WCS from header") return wcs_utils.strip_wcs_from_header(self._header) @property def wcs(self): return self._wcs @property def meta(self): return self._meta @property def mask(self): return self._mask class HeaderMixinClass(object): """ A mixin class to provide header updating from WCS objects. The parent object must have a WCS. """ def wcs(self): raise TypeError("Classes inheriting from HeaderMixin must define a " "wcs method") @property def header(self): header = self._nowcs_header wcsheader = self.wcs.to_header() if self.wcs is not None else {} # When preserving metadata, copy over keywords before doing the WCS # keyword copying, since those have specific formatting requirements # and will overwrite these in many cases (e.g., BMAJ) for key in self.meta: if key.upper() not in wcsheader: if isinstance(key, str) and len(key) <= 8: try: header[key.upper()] = str(self.meta[key]) except ValueError as ex: # need a silenced-by-default warning here? # log.warn("Skipped key {0} because {1}".format(key, ex)) pass elif isinstance(key, str) and len(key) > 8: header['COMMENT'] = "{0}={1}".format(key, self.meta[key]) # Preserve non-WCS information from previous header iteration header.update(wcsheader) if self.unit == u.one and 'BUNIT' in self._meta: # preserve the BUNIT even though it's not technically valid # (Jy/Beam) header['BUNIT'] = self._meta['BUNIT'] else: header['BUNIT'] = self.unit.to_string(format='FITS') if 'beam' in self._meta: header = self._meta['beam'].attach_to_header(header) with warnings.catch_warnings(): warnings.simplefilter("ignore") header.insert(2, Card(keyword='NAXIS', value=self.ndim)) for ind,sh in enumerate(self.shape[::-1]): header.insert(3+ind, Card(keyword='NAXIS{0:1d}'.format(ind+1), value=sh)) return header def check_jybeam_smoothing(self, raise_error_jybm=True): ''' This runs for spatial resolution operations (e.g. `spatial_smooth`) and raises either an error or a warning when smoothing will affect brightness in Jy/beam operations. This is also true for using the `with_beam` and `with_beams` methods, including 1D spectra with Jy/beam units. Parameters ---------- raise_error_jybm : bool, optional Raises a `~spectral_cube.utils.BeamUnitsError` when True (default). When False, it triggers a `~spectral_cube.utils.BeamWarning`. .. note: This is a reminder to expose raise_error_jybm to top-level functions. ''' if self.unit.is_equivalent(u.Jy / u.beam): if raise_error_jybm: raise BeamUnitsError("Attempting to change the spatial resolution of a cube with Jy/beam units." " To ignore this error, set `raise_error_jybm=False`.") else: warnings.warn("Changing the spatial resolution of a cube with Jy/beam units."
" The brightness units may be wrong!", BeamWarning) class SpatialCoordMixinClass(object): @property def _has_wcs_celestial(self): return self.wcs.has_celestial def _raise_wcs_no_celestial(self): if not self._has_wcs_celestial: raise WCSCelestialError("WCS does not contain two spatial axes.") def _celestial_axes(self): ''' Return the spatial axes in the data from the WCS object. The order of the spatial axes returned is [y, x]. ''' self._raise_wcs_no_celestial() # This works for astropy >v3 # wcs_cel_axis = [self.wcs.world_axis_physical_types.index(axtype) # for axtype in # self.wcs.celestial.world_axis_physical_types] # This works for all LTS releases wcs_cel_axis = [ax for ax, ax_type in enumerate(self.wcs.get_axis_types()) if ax_type['coordinate_type'] == 'celestial'] # Swap to numpy ordering # Since we're mapping backwards to get the numpy convention, we need to # reverse the order at the end. # 0 is the y spatial axis and 1 is the x spatial axis np_order_cel_axis = [self.ndim - 1 - ind for ind in wcs_cel_axis][::-1] return np_order_cel_axis @cube_utils.slice_syntax def world(self, view): """ Return a list of the world coordinates in a cube, projection, or a view of it. SpatialCoordMixinClass.world is called with *bracket notation*, like a NumPy array:: c.world[0:3, :, :] Returns ------- [v, y, x] : list of NumPy arrays The 3 world coordinates at each pixel in the view. For a 2D image, the output is ``[y, x]``. Examples -------- Extract the first 3 velocity channels of the cube: >>> v, y, x = c.world[0:3] Extract all the world coordinates: >>> v, y, x = c.world[:, :, :] Extract every other pixel along all axes: >>> v, y, x = c.world[::2, ::2, ::2] Extract all the world coordinates for a 2D image: >>> y, x = c.world[:, :] """ self._raise_wcs_no_celestial() # the next 3 lines are equivalent to (but more efficient than) # inds = np.indices(self._data.shape) # inds = [i[view] for i in inds] inds = np.ogrid[[slice(0, s) for s in self.shape]] inds = np.broadcast_arrays(*inds) inds = [i[view] for i in inds[::-1]] # numpy -> wcs order shp = inds[0].shape inds = np.column_stack([i.ravel() for i in inds]) world = self._wcs.all_pix2world(inds, 0).T world = [w.reshape(shp) for w in world] # 1D->3D # apply units world = [w * u.Unit(self._wcs.wcs.cunit[i]) for i, w in enumerate(world)] # convert spectral unit if needed if hasattr(self, "_spectral_unit"): if self._spectral_unit is not None: specind = self.wcs.wcs.spec world[specind] = world[specind].to(self._spectral_unit) return world[::-1] # reverse WCS -> numpy order def flattened_world(self, view=()): """ Retrieve the world coordinates corresponding to the extracted flattened version of the cube """ self._raise_wcs_no_celestial() return [wd_dim.ravel() for wd_dim in self.world[view]] def world_spines(self): """ Returns a list of 1D arrays, for the world coordinates along each pixel axis. Raises error if this operation is ill-posed (e.g. rotated world coordinates, strong distortions) This method is not currently implemented. Use ``world`` instead. 
""" raise NotImplementedError() @property def spatial_coordinate_map(self): view = tuple([0 for ii in range(self.ndim - 2)] + [slice(None)] * 2) return self.world[view][self.ndim - 2:] @property @cached def world_extrema(self): y_ax, x_ax = self._celestial_axes() corners = [(0, self.shape[x_ax]-1), (self.shape[y_ax]-1, 0), (self.shape[y_ax]-1, self.shape[x_ax]-1), (0,0)] if len(self.shape) == 2: latlon_corners = [self.world[y, x] for y,x in corners] else: latlon_corners = [self.world[0, y, x][1:] for y,x in corners] lon = u.Quantity([x for y,x in latlon_corners]) lat = u.Quantity([y for y,x in latlon_corners]) _lon_min = lon.min() _lon_max = lon.max() _lat_min = lat.min() _lat_max = lat.max() return u.Quantity(((_lon_min.to(u.deg).value, _lon_max.to(u.deg).value), (_lat_min.to(u.deg).value, _lat_max.to(u.deg).value)), u.deg) @property @cached def longitude_extrema(self): return self.world_extrema[0] @property @cached def latitude_extrema(self): return self.world_extrema[1] class SpectralAxisMixinClass(object): def _new_spectral_wcs(self, unit, velocity_convention=None, rest_value=None): """ Returns a new WCS with a different Spectral Axis unit Parameters ---------- unit : :class:`~astropy.units.Unit` Any valid spectral unit: velocity, (wave)length, or frequency. Only vacuum units are supported. velocity_convention : 'relativistic', 'radio', or 'optical' The velocity convention to use for the output velocity axis. Required if the output type is velocity. This can be either one of the above strings, or an `astropy.units` equivalency. rest_value : :class:`~astropy.units.Quantity` A rest wavelength or frequency with appropriate units. Required if output type is velocity. The cube's WCS should include this already if the *input* type is velocity, but the WCS's rest wavelength/frequency can be overridden with this parameter. .. 
note: This must be the rest frequency/wavelength *in vacuum*, even if your cube has air wavelength units """ from .spectral_axis import (convert_spectral_axis, determine_ctype_from_vconv) # Allow string specification of units, for example if not isinstance(unit, u.Unit): unit = u.Unit(unit) # Velocity conventions: required for frq <-> velo # convert_spectral_axis will handle the case of no velocity # convention specified & one is required if velocity_convention in DOPPLER_CONVENTIONS: velocity_convention = DOPPLER_CONVENTIONS[velocity_convention] elif (velocity_convention is not None and velocity_convention not in DOPPLER_CONVENTIONS.values()): raise ValueError("Velocity convention must be radio, optical, " "or relativistic.") # If rest value is specified, it must be a quantity if (rest_value is not None and (not hasattr(rest_value, 'unit') or not rest_value.unit.is_equivalent(u.m, u.spectral()))): raise ValueError("Rest value must be specified as an astropy " "quantity with spectral equivalence.") # Shorter versions to keep lines under 80 ctype_from_vconv = determine_ctype_from_vconv meta = self._meta.copy() if 'Original Unit' not in self._meta: meta['Original Unit'] = self._wcs.wcs.cunit[self._wcs.wcs.spec] meta['Original Type'] = self._wcs.wcs.ctype[self._wcs.wcs.spec] out_ctype = ctype_from_vconv(self._wcs.wcs.ctype[self._wcs.wcs.spec], unit, velocity_convention=velocity_convention) newwcs = convert_spectral_axis(self._wcs, unit, out_ctype, rest_value=rest_value) newwcs.wcs.set() return newwcs, meta @property def spectral_axis(self): # spectral objects should be forced to implement this raise NotImplementedError class MaskableArrayMixinClass(object): """ Mixin class for maskable arrays """ def _get_filled_data(self, view=(), fill=np.nan, check_endian=False, use_memmap=None): """ Return the underlying data as a numpy array. Always returns the spectral axis as the 0th axis Sets masked values to *fill* """ if check_endian: if not self._data.dtype.isnative: kind = str(self._data.dtype.kind) sz = str(self._data.dtype.itemsize) dt = '=' + kind + sz data = self._data.astype(dt) else: data = self._data else: data = self._data if self._mask is None: return data[view] if use_memmap is None and hasattr(self, '_is_huge'): use_memmap = self._is_huge return self._mask._filled(data=data, wcs=self._wcs, fill=fill, view=view, wcs_tolerance=self._wcs_tolerance, use_memmap=use_memmap ) @cube_utils.slice_syntax def filled_data(self, view): """ Return a portion of the data array, with excluded mask values replaced by ``fill_value``. Returns ------- data : Quantity The masked data. """ return u.Quantity(self._get_filled_data(view, fill=self._fill_value), self.unit, copy=False) def filled(self, fill_value=None): if fill_value is not None: return u.Quantity(self._get_filled_data(fill=fill_value), self.unit, copy=False) return self.filled_data[:] @cube_utils.slice_syntax def unitless_filled_data(self, view): """ Return a portion of the data array, with excluded mask values replaced by ``fill_value``. Returns ------- data : numpy.array The masked data. """ return self._get_filled_data(view, fill=self._fill_value) @property def fill_value(self): """ The replacement value used by `~spectral_cube.base_class.MaskableArrayMixinClass.filled_data`. fill_value is immutable; use `~spectral_cube.base_class.MaskableArrayMixinClass.with_fill_value` to create a new cube with a different fill value. """ return self._fill_value def with_fill_value(self, fill_value): """ Create a new object with a different ``fill_value``. 
Notes ----- This method is fast (it does not copy any data) """ return self._new_thing_with(fill_value=fill_value) @abc.abstractmethod def _new_thing_with(self): raise NotImplementedError class MultiBeamMixinClass(object): """ A mixin class to handle multibeam objects. To be used by VaryingResolutionSpectralCube's and OneDSpectrum's """ def jtok_factors(self, equivalencies=()): """ Compute an array of multiplicative factors that will convert from Jy/beam to K """ factors = [] for bm,frq in zip(self.beams, self.with_spectral_unit(u.Hz).spectral_axis): # create a beam equivalency for brightness temperature bmequiv = bm.jtok_equiv(frq) factor = (u.Jy).to(u.K, equivalencies=bmequiv+list(equivalencies)) factors.append(factor) factor = np.array(factors) return factor @property def beams(self): return self._beams[self.goodbeams_mask] @beams.setter def beams(self, obj): if not isinstance(obj, Beams): raise TypeError("beam must be a radio_beam.Beams object.") if not obj.size == self.shape[0]: raise ValueError("The Beams object must have the same size as the " "data. Found a size of {0} and the data have a " "size of {1}".format(obj.size, self.size)) self._beams = obj @property @cached def pixels_per_beam(self): pixels_per_beam = [(beam.sr / (astropy.wcs.utils.proj_plane_pixel_area(self.wcs) * u.deg**2)).to(u.one).value for beam in self.beams] return pixels_per_beam @property def unmasked_beams(self): return self._beams @property def goodbeams_mask(self): if hasattr(self, '_goodbeams_mask'): return self._goodbeams_mask else: return self.unmasked_beams.isfinite @goodbeams_mask.setter def goodbeams_mask(self, value): if value.size != self.shape[0]: raise ValueError("The 'good beams' mask must have the same size " "as the cube's spectral dimension") self._goodbeams_mask = value def identify_bad_beams(self, threshold, reference_beam=None, criteria=['sr','major','minor'], mid_value=np.nanmedian): """ Mask out any layers in the cube that have beams that differ from the central value of the beam by more than the specified threshold. Parameters ---------- threshold : float Fractional threshold reference_beam : Beam A beam to use as the reference. If unspecified, ``mid_value`` will be used to select a middle beam criteria : list A list of criteria to compare. Can include 'sr','major','minor','pa' or any subset of those. mid_value : function The function used to determine the 'mid' value to compare to. This will identify the middle-valued beam area/major/minor/pa. Returns ------- includemask : np.array A boolean array where ``True`` indicates the good beams """ includemask = np.ones(self.unmasked_beams.size, dtype='bool') all_criteria = {'sr','major','minor','pa'} if not set.issubset(set(criteria), set(all_criteria)): raise ValueError("Criteria must be one of the allowed options: " "{0}".format(all_criteria)) props = {prop: u.Quantity([getattr(beam, prop) for beam in self.unmasked_beams]) for prop in all_criteria} if reference_beam is None: reference_beam = Beam(major=mid_value(props['major']), minor=mid_value(props['minor']), pa=mid_value(props['pa']) ) for prop in criteria: val = props[prop] mid = getattr(reference_beam, prop) diff = np.abs((val-mid)/mid) assert diff.shape == includemask.shape includemask[diff > threshold] = False return includemask def average_beams(self, threshold, mask='compute', warn=False): """ Average the beams. Note that this operation only makes sense in limited contexts! 
Generally one would want to convolve all the beams to a common shape, but this method is meant to handle the "simple" case when all your beams are the same to within some small factor and can therefore be arithmetically averaged. Parameters ---------- threshold : float The fractional difference between beam major, minor, and pa to permit mask : 'compute', None, or boolean array The mask to apply to the beams. Useful for excluding bad channels and edge beams. warn : bool Warn if successful? Returns ------- new_beam : radio_beam.Beam A new radio beam object that is the average of the unmasked beams """ use_dask = isinstance(self._data, da.Array) if mask == 'compute': if use_dask: # If we are dealing with dask arrays, we compute the beam # mask once and for all since it is used multiple times in its # entirety in the remainder of this method. beam_mask = da.any(da.logical_and(self._mask_include, self.goodbeams_mask[:, None, None]), axis=(1, 2)) # da.any appears to return an object dtype instead of a bool beam_mask = self._compute(beam_mask).astype('bool') elif self.mask is not None: beam_mask = np.any(np.logical_and(self.mask.include(), self.goodbeams_mask[:, None, None]), axis=(1, 2)) else: beam_mask = self.goodbeams_mask else: if mask.ndim > 1: beam_mask = np.logical_and(mask, self.goodbeams_mask[:, None, None]) else: beam_mask = np.logical_and(mask, self.goodbeams_mask) # use private _beams here because the public one excludes the bad beams # by default new_beam = self._beams.average_beam(includemask=beam_mask) if np.isnan(new_beam): raise ValueError("Beam was not finite after averaging. " "This either indicates that there was a problem " "with the include mask, one of the beam's values, " "or a bug.") self._check_beam_areas(threshold, mean_beam=new_beam, mask=beam_mask) if warn: warnings.warn("Arithmetic beam averaging is being performed. This is " "not a mathematically robust operation, but is being " "permitted because the beams differ by " "<{0}".format(threshold), BeamAverageWarning ) return new_beam def _handle_beam_areas_wrapper(self, function, beam_threshold=None): """ Wrapper: if the function takes "axis" and is operating over axis 0 (the spectral axis), check that the beam threshold is not exceeded before performing the operation Also, if the operation *is* valid, average the beam appropriately to get the output """ # deferred import to avoid a circular import problem from .lower_dimensional_structures import LowerDimensionalObject if beam_threshold is None: beam_threshold = self.beam_threshold def newfunc(*args, **kwargs): """ Wrapper function around the standard operations to handle beams when creating projections """ # check that the spectral axis is being operated over. 
If it is, # we need to average beams # moments are a special case b/c they default to axis=0 need_to_handle_beams = (('axis' in kwargs and ((kwargs['axis']==0) or (hasattr(kwargs['axis'], '__len__') and 0 in kwargs['axis']))) or ('axis' not in kwargs and 'moment' in function.__name__)) if need_to_handle_beams: # do this check *first* so we don't do an expensive operation # and crash afterward avg_beam = self.average_beams(beam_threshold, warn=True) result = function(*args, **kwargs) if not isinstance(result, LowerDimensionalObject): # numpy arrays are sometimes returned; these have no metadata return result elif need_to_handle_beams: result.meta['beam'] = avg_beam result._beam = avg_beam return result return newfunc def _check_beam_areas(self, threshold, mean_beam, mask=None): """ Check that the beam areas are the same to within some threshold """ if mask is not None: assert len(mask) == len(self.unmasked_beams) mask = np.array(mask, dtype='bool') else: mask = np.ones(len(self.unmasked_beams), dtype='bool') qtys = dict(sr=self.unmasked_beams.sr, major=self.unmasked_beams.major.to(u.deg), minor=self.unmasked_beams.minor.to(u.deg), # position angles are not really comparable #pa=u.Quantity([bm.pa for bm in self.unmasked_beams], u.deg), ) errormessage = "" for (qtyname, qty) in (qtys.items()): minv = qty[mask].min() maxv = qty[mask].max() mn = getattr(mean_beam, qtyname) maxdiff = (np.max(np.abs(u.Quantity((maxv-mn, minv-mn))))/mn).decompose() if isinstance(threshold, dict): th = threshold[qtyname] else: th = threshold if maxdiff > th: errormessage += ("Beam {2}s differ by up to {0}x, which is greater" " than the threshold {1}\n".format(maxdiff, threshold, qtyname )) if errormessage != "": raise ValueError(errormessage) def mask_out_bad_beams(self, threshold, reference_beam=None, criteria=['sr','major','minor'], mid_value=np.nanmedian): """ See `identify_bad_beams`. This function returns a masked cube Returns ------- newcube : VaryingResolutionSpectralCube The cube with bad beams masked out """ goodbeams = self.identify_bad_beams(threshold=threshold, reference_beam=reference_beam, criteria=criteria, mid_value=mid_value) includemask = BooleanArrayMask(goodbeams[:, None, None], self._wcs, shape=self._data.shape) use_dask = isinstance(self._data, da.Array) if use_dask: newmask = da.logical_and(self._mask_include, includemask) elif self.mask is None: newmask = includemask else: newmask = np.bitwise_and(self.mask, includemask) return self._new_thing_with(mask=newmask, beam_threshold=threshold, goodbeams_mask=np.bitwise_and(self.goodbeams_mask, goodbeams), ) def with_beams(self, beams, goodbeams_mask=None, raise_error_jybm=True): ''' Attach a new beams object to the VaryingResolutionSpectralCube. Parameters ---------- beams : `~radio_beam.Beams` A new beams object. ''' # Catch cases with units in Jy/beam where new beams will alter the units. self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) meta = self.meta.copy() meta['beams'] = beams return self._new_thing_with(beams=beams, meta=meta) @abc.abstractmethod def _new_thing_with(self): # since the above two methods require this method, it's an ABC of this # mixin as well raise NotImplementedError class BeamMixinClass(object): """ Functionality for objects with a single beam. Specific objects (cubes, LDOs) still need to define their own ``with_beam`` methods. """ @property def beam(self): if self._beam is None: raise NoBeamError("No beam is defined for this SpectralCube or the" " beam information could not be parsed from the" " header. 
A `~radio_beam.Beam` object can be" " added using `cube.with_beam`.") return self._beam @beam.setter def beam(self, obj): if not isinstance(obj, Beam) and obj is not None: raise TypeError("beam must be a radio_beam.Beam object.") self._beam = obj @property @cached def pixels_per_beam(self): return (self.beam.sr / (astropy.wcs.utils.proj_plane_pixel_area(self.wcs) * u.deg**2)).to(u.one).value ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/conftest.py0000644000175100001710000004137400000000000020631 0ustar00runnerdocker# this contains imports plugins that configure py.test for astropy tests. # by importing them here in conftest.py they are discoverable by py.test # no matter how it is invoked within the source tree. from __future__ import print_function, absolute_import, division import os from distutils.version import LooseVersion from astropy.units.equivalencies import pixel_scale # Import casatools and casatasks here if available as they can otherwise # cause a segfault if imported later on during tests. try: import casatools import casatasks except ImportError: pass import pytest import numpy as np from astropy.io import fits from astropy import wcs from astropy import units from astropy.version import version as astropy_version if astropy_version < '3.0': from astropy.tests.pytest_plugins import * del pytest_report_header else: from pytest_astropy_header.display import PYTEST_HEADER_MODULES, TESTED_VERSIONS @pytest.fixture(params=[False, True]) def use_dask(request): # Fixture to run tests that use this fixture with both SpectralCube and # DaskSpectralCube return request.param def pytest_configure(config): config.option.astropy_header = True PYTEST_HEADER_MODULES['Astropy'] = 'astropy' PYTEST_HEADER_MODULES['regions'] = 'regions' PYTEST_HEADER_MODULES['APLpy'] = 'aplpy' HEADER_FILENAME = os.path.join(os.path.dirname(__file__), 'tests', 'data', 'header_jybeam.hdr') def transpose(d, h, axes): d = d.transpose(np.argsort(axes)) h2 = h.copy() for i in range(len(axes)): for key in ['NAXIS', 'CDELT', 'CRPIX', 'CRVAL', 'CTYPE', 'CUNIT']: h2['%s%i' % (key, i + 1)] = h['%s%i' % (key, axes[i] + 1)] return d, h2 def prepare_4_beams(): beams = np.recarray(4, dtype=[('BMAJ', '>f4'), ('BMIN', '>f4'), ('BPA', '>f4'), ('CHAN', '>i4'), ('POL', '>i4')]) beams['BMAJ'] = [0.4,0.3,0.3,0.4] # arcseconds beams['BMIN'] = [0.1,0.2,0.2,0.1] beams['BPA'] = [0,45,60,30] # degrees beams['CHAN'] = [0,1,2,3] beams['POL'] = [0,0,0,0] beams = fits.BinTableHDU(beams) beams.header['TTYPE1'] = 'BMAJ' beams.header['TUNIT1'] = 'arcsec' beams.header['TTYPE2'] = 'BMIN' beams.header['TUNIT2'] = 'arcsec' beams.header['TTYPE3'] = 'BPA' beams.header['TUNIT3'] = 'deg' return beams def prepare_advs_data(): # Single Stokes h = fits.header.Header.fromtextfile(HEADER_FILENAME) h['BUNIT'] = 'K' # Kelvins are a valid unit, JY/BEAM are not: they should be tested separately h['NAXIS1'] = 2 h['NAXIS2'] = 3 h['NAXIS3'] = 4 h['NAXIS4'] = 1 np.random.seed(42) d = np.random.random((1, 2, 3, 4)) return d, h @pytest.fixture def data_advs(tmp_path): d, h = prepare_advs_data() fits.writeto(tmp_path / 'advs.fits', d, h) return tmp_path / 'advs.fits' @pytest.fixture def data_dvsa(tmp_path): d, h = prepare_advs_data() d, h = transpose(d, h, [1, 2, 3, 0]) fits.writeto(tmp_path / 'dvsa.fits', d, h) return tmp_path / 'dvsa.fits' @pytest.fixture def data_vsad(tmp_path): d, h = prepare_advs_data() d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) 
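# The fixture names appear to encode the FITS axis order -- 'a' (RA), 'd' (Dec),
# 'v' (velocity/spectral), 's' (Stokes) -- and each pass of the cyclic transpose
# [1, 2, 3, 0] rotates that order by one place, so two passes here turn 'advs'
# data into 'vsad' order.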
fits.writeto(tmp_path / 'vsad.fits', d, h) return tmp_path / 'vsad.fits' @pytest.fixture def data_sadv(tmp_path): d, h = prepare_advs_data() d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) fits.writeto(tmp_path / 'sadv.fits', d, h) return tmp_path / 'sadv.fits' @pytest.fixture def data_sdav(tmp_path): d, h = prepare_advs_data() d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [0, 2, 1, 3]) fits.writeto(tmp_path / 'sdav.fits', d, h) return tmp_path / 'sdav.fits' @pytest.fixture def data_sdav_beams(tmp_path): d, h = prepare_advs_data() d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [1, 2, 3, 0]) d, h = transpose(d, h, [0, 2, 1, 3]) del h['BMAJ'], h['BMIN'], h['BPA'] # want 4 spectral channels np.random.seed(42) d = np.random.random((4, 3, 2, 1)) beams = prepare_4_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / 'sdav_beams.fits') return tmp_path / 'sdav_beams.fits' @pytest.fixture def data_advs_nobeam(tmp_path): d, h = prepare_advs_data() del h['BMAJ'] del h['BMIN'] del h['BPA'] fits.writeto(tmp_path / 'advs_nobeam.fits', d, h) return tmp_path / 'advs_nobeam.fits' def prepare_adv_data(): h = fits.header.Header.fromtextfile(HEADER_FILENAME) h['BUNIT'] = 'K' # Kelvins are a valid unit, JY/BEAM are not: they should be tested separately h['NAXIS1'] = 2 h['NAXIS2'] = 3 h['NAXIS3'] = 4 h['NAXIS'] = 3 for k in list(h.keys()): if k.endswith('4'): del h[k] np.random.seed(96) d = np.random.random((4, 3, 2)) return d, h @pytest.fixture def data_adv(tmp_path): d, h = prepare_adv_data() fits.writeto(tmp_path / 'adv.fits', d, h) return tmp_path / 'adv.fits' @pytest.fixture def data_adv_simple(tmp_path): d, h = prepare_adv_data() d.flat[:] = np.arange(d.size) fits.writeto(tmp_path / 'adv_simple.fits', d, h) return tmp_path / 'adv_simple.fits' @pytest.fixture def data_adv_jybeam_upper(tmp_path): d, h = prepare_adv_data() h['BUNIT'] = 'JY/BEAM' fits.writeto(tmp_path / 'adv_JYBEAM_upper.fits', d, h) return tmp_path / 'adv_JYBEAM_upper.fits' @pytest.fixture def data_adv_jybeam_lower(tmp_path): d, h = prepare_adv_data() h['BUNIT'] = 'Jy/beam' fits.writeto(tmp_path / 'adv_Jybeam_lower.fits', d, h) return tmp_path / 'adv_Jybeam_lower.fits' @pytest.fixture def data_adv_jybeam_whitespace(tmp_path): d, h = prepare_adv_data() h['BUNIT'] = ' Jy / beam ' fits.writeto(tmp_path / 'adv_Jybeam_whitespace.fits', d, h) return tmp_path / 'adv_Jybeam_whitespace.fits' @pytest.fixture def data_adv_beams(tmp_path): d, h = prepare_adv_data() bmaj, bmin, bpa = h['BMAJ'], h['BMIN'], h['BPA'] del h['BMAJ'], h['BMIN'], h['BPA'] beams = prepare_4_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / 'adv_beams.fits') return tmp_path / 'adv_beams.fits' @pytest.fixture def data_vad(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) fits.writeto(tmp_path / 'vad.fits', d, h) return tmp_path / 'vad.fits' @pytest.fixture def data_vda(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) d, h = transpose(d, h, [2, 1, 0]) fits.writeto(tmp_path / 'vda.fits', d, h) return tmp_path / 'vda.fits' @pytest.fixture def data_vda_jybeam_upper(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) d, h = transpose(d, h, [2, 1, 0]) h['BUNIT'] = 'JY/BEAM' fits.writeto(tmp_path / 'vda_JYBEAM_upper.fits', d, h) return 
tmp_path / 'vda_JYBEAM_upper.fits' @pytest.fixture def data_vda_jybeam_lower(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) d, h = transpose(d, h, [2, 1, 0]) h['BUNIT'] = 'Jy/beam' fits.writeto(tmp_path / 'vda_Jybeam_lower.fits', d, h) return tmp_path / 'vda_Jybeam_lower.fits' @pytest.fixture def data_vda_jybeam_whitespace(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) d, h = transpose(d, h, [2, 1, 0]) h['BUNIT'] = ' Jy / beam ' fits.writeto(tmp_path / 'vda_Jybeam_whitespace.fits', d, h) return tmp_path / 'vda_Jybeam_whitespace.fits' @pytest.fixture def data_vda_beams(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) d, h = transpose(d, h, [2, 1, 0]) h['BUNIT'] = ' Jy / beam ' del h['BMAJ'], h['BMIN'], h['BPA'] beams = prepare_4_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / 'vda_beams.fits') return tmp_path / 'vda_beams.fits' @pytest.fixture def data_vda_beams_image(tmp_path): d, h = prepare_adv_data() d, h = transpose(d, h, [2, 0, 1]) d, h = transpose(d, h, [2, 1, 0]) h['BUNIT'] = ' Jy / beam ' del h['BMAJ'], h['BMIN'], h['BPA'] beams = prepare_4_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / 'vda_beams.fits') from casatools import image ia = image() ia.fromfits(infile=tmp_path / 'vda_beams.fits', outfile=tmp_path / 'vda_beams.image', overwrite=True) for (bmaj, bmin, bpa, chan, pol) in beams.data: ia.setrestoringbeam(major={'unit': 'arcsec', 'value': bmaj}, minor={'unit': 'arcsec', 'value': bmin}, pa={'unit': 'deg', 'value': bpa}, channel=chan, polarization=pol) ia.close() return tmp_path / 'vda_beams.image' def prepare_255_header(): # make a version with spatial pixels h = fits.header.Header.fromtextfile(HEADER_FILENAME) for k in list(h.keys()): if k.endswith('4'): del h[k] h['BUNIT'] = 'K' # Kelvins are a valid unit, JY/BEAM are not: they should be tested separately return h @pytest.fixture def data_255(tmp_path): h = prepare_255_header() d = np.arange(2*5*5, dtype='float').reshape((2,5,5)) fits.writeto(tmp_path / '255.fits', d, h) return tmp_path / '255.fits' @pytest.fixture def data_255_delta(tmp_path): h = prepare_255_header() # test cube for convolution, regridding d = np.zeros([2,5,5], dtype='float') d[0,2,2] = 1.0 fits.writeto(tmp_path / '255_delta.fits', d, h) return tmp_path / '255_delta.fits' @pytest.fixture def data_455_delta_beams(tmp_path): h = prepare_255_header() # test cube for convolution, regridding d = np.zeros([4,5,5], dtype='float') d[:,2,2] = 1.0 beams = prepare_4_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / '455_delta_beams.fits') return tmp_path / '455_delta_beams.fits' @pytest.fixture def data_455_degree_beams(tmp_path): """ Test cube for AIPS-style beam specfication """ h = prepare_255_header() d = np.zeros([4,5,5], dtype='float') beams = prepare_4_beams() beams.data['BMAJ'] /= 3600 beams.data['BMIN'] /= 3600 beams.header['TTYPE1'] = 'BMAJ' beams.header['TUNIT1'] = 'DEGREES' beams.header['TTYPE2'] = 'BMIN' beams.header['TUNIT2'] = 'DEGREES' hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / '455_degree_beams.fits') return tmp_path / '455_degree_beams.fits' @pytest.fixture def data_522_delta(tmp_path): h = prepare_255_header() d = np.zeros([5,2,2], dtype='float') d[2,:,:] = 1.0 fits.writeto(tmp_path / '522_delta.fits', d, h) return tmp_path / '522_delta.fits' def prepare_5_beams(): beams = np.recarray(5, 
dtype=[('BMAJ', '>f4'), ('BMIN', '>f4'), ('BPA', '>f4'), ('CHAN', '>i4'), ('POL', '>i4')]) beams['BMAJ'] = [0.5,0.4,0.3,0.4,0.5] # arcseconds beams['BMIN'] = [0.1,0.2,0.3,0.2,0.1] beams['BPA'] = [0,45,60,30,0] # degrees beams['CHAN'] = [0,1,2,3,4] beams['POL'] = [0,0,0,0,0] beams = fits.BinTableHDU(beams) beams.header['TTYPE1'] = 'BMAJ' beams.header['TUNIT1'] = 'arcsec' beams.header['TTYPE2'] = 'BMIN' beams.header['TUNIT2'] = 'arcsec' beams.header['TTYPE3'] = 'BPA' beams.header['TUNIT3'] = 'deg' return beams @pytest.fixture def data_522_delta_beams(tmp_path): h = prepare_255_header() d = np.zeros([5,2,2], dtype='float') d[2,:,:] = 1.0 beams = prepare_5_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / '522_delta_beams.fits') return tmp_path / '522_delta_beams.fits' def prepare_55_header(): h = fits.header.Header.fromtextfile(HEADER_FILENAME) for k in list(h.keys()): if k.endswith('4') or k.endswith('3'): del h[k] h['BUNIT'] = 'K' return h @pytest.fixture def data_55(tmp_path): # Make a 2D spatial version h = prepare_55_header() d = np.arange(5 * 5, dtype='float').reshape((5, 5)) fits.writeto(tmp_path / '55.fits', d, h) return tmp_path / '55.fits' @pytest.fixture def data_55_delta(tmp_path): # test cube for convolution, regridding h = prepare_55_header() d = np.zeros([5, 5], dtype='float') d[2, 2] = 1.0 fits.writeto(tmp_path / '55_delta.fits', d, h) return tmp_path / '55_delta.fits' def prepare_5_header(): h = wcs.WCS(fits.Header.fromtextfile(HEADER_FILENAME)).sub([wcs.WCSSUB_SPECTRAL]).to_header() return h @pytest.fixture def data_5_spectral(tmp_path): # oneD spectra h = prepare_5_header() d = np.arange(5, dtype='float') fits.writeto(tmp_path / '5_spectral.fits', d, h) return tmp_path / '5_spectral.fits' @pytest.fixture def data_5_spectral_beams(tmp_path): h = prepare_5_header() d = np.arange(5, dtype='float') beams = prepare_5_beams() hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / '5_spectral_beams.fits') return tmp_path / '5_spectral_beams.fits' def prepare_5_beams_with_pixscale(pixel_scale): beams = np.recarray(5, dtype=[('BMAJ', '>f4'), ('BMIN', '>f4'), ('BPA', '>f4'), ('CHAN', '>i4'), ('POL', '>i4')]) pixel_scale = pixel_scale.to(units.arcsec).value beams['BMAJ'] = [3.5 * pixel_scale,3 * pixel_scale,3 * pixel_scale,3 * pixel_scale,3 * pixel_scale] # arcseconds beams['BMIN'] = [2 * pixel_scale,2.5 * pixel_scale,3 * pixel_scale,2.5 * pixel_scale,2 * pixel_scale] beams['BPA'] = [0,45,60,30,0] # degrees beams['CHAN'] = [0,1,2,3,4] beams['POL'] = [0,0,0,0,0] beams = fits.BinTableHDU(beams) beams.header['TTYPE1'] = 'BMAJ' beams.header['TUNIT1'] = 'arcsec' beams.header['TTYPE2'] = 'BMIN' beams.header['TUNIT2'] = 'arcsec' beams.header['TTYPE3'] = 'BPA' beams.header['TUNIT3'] = 'deg' return beams @pytest.fixture def point_source_5_spectral_beams(tmp_path): from radio_beam import Beams from astropy.convolution import convolve_fft h = fits.header.Header.fromtextfile(HEADER_FILENAME) h['BUNIT'] = "Jy/beam" d = np.zeros((5, 11, 11), dtype=float) d[:, 5, 5] = 1. # NOTE: this matches the header. Should take that directly from the header instead of setting. pixel_scale = 2. * units.arcsec beams = prepare_5_beams_with_pixscale(pixel_scale) for i, beam in enumerate(Beams.from_fits_bintable(beams)): # Convolve point source to the beams. 
d[i] = convolve_fft(d[i], beam.as_kernel(pixel_scale)) # Correct for the beam area in Jy/beam # So effectively Jy / pixel -> Jy/beam pix_to_beam = beam.sr.to(units.arcsec**2) / pixel_scale**2 d[i] *= pix_to_beam.value # Ensure that the scaling is correct. The center pixel should remain ~1. np.testing.assert_allclose(d[:, 5, 5], 1., atol=1e-5) hdul = fits.HDUList([fits.PrimaryHDU(data=d, header=h), beams]) hdul.writeto(tmp_path / 'point_source_conv_5_spectral_beams.fits') return tmp_path / 'point_source_conv_5_spectral_beams.fits' @pytest.fixture def point_source_5_one_beam(tmp_path): from radio_beam import Beam from astropy.convolution import convolve_fft h = fits.header.Header.fromtextfile(HEADER_FILENAME) h['BUNIT'] = "Jy/beam" d = np.zeros((5, 11, 11), dtype=float) d[:, 5, 5] = 1. # NOTE: this matches the header. Should take that directly from the header instead of setting. pixel_scale = 2. * units.arcsec beam = Beam(3 * pixel_scale) beamprops = beam.to_header_keywords() for key in beamprops: h[key] = beamprops[key] for i in range(5): # Convolve point source to the beams. d[i] = convolve_fft(d[i], beam.as_kernel(pixel_scale)) # Correct for the beam area in Jy/beam # So effectively Jy / pixel -> Jy/beam pix_to_beam = beam.sr.to(units.arcsec**2) / pixel_scale**2 d[i] *= pix_to_beam.value # Ensure that the scaling is correct. The center pixel should remain ~1. np.testing.assert_allclose(d[:, 5, 5], 1., atol=1e-5) hdul = fits.PrimaryHDU(data=d, header=h) hdul.writeto(tmp_path / 'point_source_conv_5_one_beam.fits') return tmp_path / 'point_source_conv_5_one_beam.fits' ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/cube_utils.py0000644000175100001710000005762700000000000021152 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import contextlib import warnings from copy import deepcopy try: import builtins except ImportError: # python2 import __builtin__ as builtins import dask.array as da import numpy as np from astropy.wcs.utils import proj_plane_pixel_area from astropy.wcs import (WCSSUB_SPECTRAL, WCSSUB_LONGITUDE, WCSSUB_LATITUDE) from . import wcs_utils from .utils import FITSWarning, AstropyUserWarning, WCSCelestialError from astropy import log from astropy.io import fits from astropy.wcs.utils import is_proj_plane_distorted from astropy.io.fits import BinTableHDU, Column from astropy import units as u import itertools import re from radio_beam import Beam def _fix_spectral(wcs): """ Attempt to fix a cube with an invalid spectral axis definition. Only uses well-known exceptions, e.g. CTYPE = 'VELOCITY'. For the rest, it will try to raise a helpful error. 
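A minimal sketch of the intended behavior, assuming ``'VELOCITY'`` is one of the known-bad CTYPEs listed in ``wcs_utils.bad_spectypes_mapping``::

    >>> from astropy.wcs import WCS
    >>> w = WCS(naxis=3)
    >>> w.wcs.ctype = ['RA---SIN', 'DEC--SIN', 'VELOCITY']
    >>> _fix_spectral(w).wcs.ctype[2]  # doctest: +SKIP
    'VELO'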
""" axtypes = wcs.get_axis_types() types = [a['coordinate_type'] for a in axtypes] if wcs.naxis not in (3, 4): raise TypeError("The WCS has {0} axes of types {1}".format(len(types), types)) # sanitize noncompliant headers if 'spectral' not in types: log.warning("No spectral axis found; header may be non-compliant.") for ind,tp in enumerate(types): if tp not in ('celestial','stokes'): if wcs.wcs.ctype[ind] in wcs_utils.bad_spectypes_mapping: wcs.wcs.ctype[ind] = wcs_utils.bad_spectypes_mapping[wcs.wcs.ctype[ind]] return wcs def _split_stokes(array, wcs): """ Given a 4-d data cube with 4-d WCS (spectral cube + stokes) return a dictionary of data and WCS objects for each Stokes component Parameters ---------- array : `~numpy.ndarray` The input 3-d array with two position dimensions, one spectral dimension, and a Stokes dimension. wcs : `~astropy.wcs.WCS` The input 3-d WCS with two position dimensions, one spectral dimension, and a Stokes dimension. """ if array.ndim not in (3,4): raise ValueError("Input array must be 3- or 4-dimensional for a" " STOKES cube") if wcs.wcs.naxis != 4: raise ValueError("Input WCS must be 4-dimensional for a STOKES cube") wcs = _fix_spectral(wcs) # reverse from wcs -> numpy convention axtypes = wcs.get_axis_types()[::-1] types = [a['coordinate_type'] for a in axtypes] try: # Find stokes dimension stokes_index = types.index('stokes') except ValueError: # stokes not in list, but we are 4d if types.count('celestial') == 2 and types.count('spectral') == 1: if None in types: stokes_index = types.index(None) log.warning("FITS file has no STOKES axis, but it has a blank" " axis type at index {0} that is assumed to be " "stokes.".format(4-stokes_index)) else: for ii,tp in enumerate(types): if tp not in ('celestial', 'spectral'): stokes_index = ii stokes_type = tp log.warning("FITS file has no STOKES axis, but it has an axis" " of type {1} at index {0} that is assumed to be " "stokes.".format(4-stokes_index, stokes_type)) else: raise IOError("There are 4 axes in the data cube but no STOKES " "axis could be identified") # TODO: make the stokes names more general stokes_names = ["I", "Q", "U", "V"] stokes_arrays = {} wcs_slice = wcs_utils.drop_axis(wcs, wcs.naxis - 1 - stokes_index) if array.ndim == 4: for i_stokes in range(array.shape[stokes_index]): array_slice = [i_stokes if idim == stokes_index else slice(None) for idim in range(array.ndim)] stokes_arrays[stokes_names[i_stokes]] = array[tuple(array_slice)] else: # 3D array with STOKES as a 4th header parameter stokes_arrays['I'] = array return stokes_arrays, wcs_slice def _orient(array, wcs): """ Given a 3-d spectral cube and WCS, swap around the axes so that the spectral axis cube is the first in Numpy notation, and the last in WCS notation. Parameters ---------- array : `~numpy.ndarray` The input 3-d array with two position dimensions and one spectral dimension. wcs : `~astropy.wcs.WCS` The input 3-d WCS with two position dimensions and one spectral dimension. 
""" if array.ndim != 3: raise ValueError("Input array must be 3-dimensional") if wcs.wcs.naxis != 3: raise ValueError("Input WCS must be 3-dimensional") wcs = wcs_utils.diagonal_wcs_to_cdelt(_fix_spectral(wcs)) # reverse from wcs -> numpy convention axtypes = wcs.get_axis_types()[::-1] types = [a['coordinate_type'] for a in axtypes] n_celestial = types.count('celestial') if n_celestial == 0: raise ValueError('No celestial axes found in WCS') elif n_celestial != 2: raise ValueError('WCS should contain 2 celestial dimensions but ' 'contains {0}'.format(n_celestial)) n_spectral = types.count('spectral') if n_spectral == 0: raise ValueError('No spectral axes found in WCS') elif n_spectral != 1: raise ValueError('WCS should contain one spectral dimension but ' 'contains {0}'.format(n_spectral)) nums = [None if a['coordinate_type'] != 'celestial' else a['number'] for a in axtypes] if 'stokes' in types: raise ValueError("Input WCS should not contain stokes") t = [types.index('spectral'), nums.index(1), nums.index(0)] if t == [0, 1, 2]: result_array = array else: result_array = array.transpose(t) result_wcs = wcs.sub([WCSSUB_LONGITUDE, WCSSUB_LATITUDE, WCSSUB_SPECTRAL]) return result_array, result_wcs def slice_syntax(f): """ This decorator wraps a function that accepts a tuple of slices. After wrapping, the function acts like a property that accepts bracket syntax (e.g., p[1:3, :, :]) Parameters ---------- f : function """ def wrapper(self): result = SliceIndexer(f, self) result.__doc__ = f.__doc__ return result wrapper.__doc__ = slice_doc.format(f.__doc__ or '', f.__name__) result = property(wrapper) return result slice_doc = """ {0} Notes ----- Supports efficient Numpy slice notation, like ``{1}[0:3, :, 2:4]`` """ class SliceIndexer(object): def __init__(self, func, _other): self._func = func self._other = _other def __getitem__(self, view): result = self._func(self._other, view) if isinstance(result, da.Array): result = result.compute() return result @property def size(self): return self._other.size @property def ndim(self): return self._other.ndim @property def shape(self): return self._other.shape def __iter__(self): raise Exception("You need to specify a slice (e.g. ``[:]`` or " "``[0,:,:]`` in order to access this property.") # TODO: make this into a proper configuration item # TODO: make threshold depend on memory? MEMORY_THRESHOLD=1e8 def is_huge(cube): if cube.size < MEMORY_THRESHOLD: # smallish return False else: return True def iterator_strategy(cube, axis=None): """ Guess the most efficient iteration strategy for iterating over a cube, given its size and layout Parameters ---------- cube : SpectralCube instance The cube to iterate over axis : [0, 1, 2] For reduction methods, the axis that is being collapsed Returns ------- strategy : ['cube' | 'ray' | 'slice'] The recommended iteration strategy. *cube* recommends working with the entire array in memory *slice* recommends working with one slice at a time *ray* recommends working with one ray at a time """ # pretty simple for now if cube.size < 1e8: # smallish return 'cube' return 'slice' def try_load_beam(header): ''' Try loading a beam from a FITS header. ''' try: beam = Beam.from_fits_header(header) return beam except Exception as ex: # We don't emit a warning if no beam was found since it's ok for # cubes to not have beams # if 'No BMAJ' not in str(ex): # warnings.warn("Could not parse beam information from header." 
# " Exception was: {0}".format(ex.__repr__()), # FITSWarning # ) # Avoid warning since cubes don't have a beam # Warning now provided when `SpectralCube.beam` is None beam = None return beam def try_load_beams(data): ''' Try loading a beam table from a FITS HDU list. ''' try: from radio_beam import Beam except ImportError: warnings.warn("radio_beam is not installed. No beam " "can be created.", ImportError ) if isinstance(data, fits.BinTableHDU): if 'BPA' in data.data.names: beam_table = data.data return beam_table else: raise ValueError("No beam table found") elif isinstance(data, fits.HDUList): for ihdu, hdu_item in enumerate(data): if isinstance(hdu_item, (fits.PrimaryHDU, fits.ImageHDU)): beam = try_load_beams(hdu_item.header) elif isinstance(hdu_item, fits.BinTableHDU): if 'BPA' in hdu_item.data.names: beam_table = hdu_item.data return beam_table try: # if there was a beam in a header, but not a beam table return beam except NameError: # if the for loop has completed, we didn't find a beam table raise ValueError("No beam table found") elif isinstance(data, (fits.PrimaryHDU, fits.ImageHDU)): return try_load_beams(data.header) elif isinstance(data, fits.Header): try: beam = Beam.from_fits_header(data) return beam except Exception as ex: # warnings.warn("Could not parse beam information from header." # " Exception was: {0}".format(ex.__repr__()), # FITSWarning # ) # Avoid warning since cubes don't have a beam # Warning now provided when `SpectralCube.beam` is None beam = None else: raise ValueError("How did you get here? This is some sort of error.") def beams_to_bintable(beams): """ Convert a list of beams to a CASA-style BinTableHDU """ c1 = Column(name='BMAJ', format='1E', array=[bm.major.to(u.arcsec).value for bm in beams], unit=u.arcsec.to_string('FITS')) c2 = Column(name='BMIN', format='1E', array=[bm.minor.to(u.arcsec).value for bm in beams], unit=u.arcsec.to_string('FITS')) c3 = Column(name='BPA', format='1E', array=[bm.pa.to(u.deg).value for bm in beams], unit=u.deg.to_string('FITS')) #c4 = Column(name='CHAN', format='1J', array=[bm.meta['CHAN'] if 'CHAN' in bm.meta else 0 for bm in beams]) c4 = Column(name='CHAN', format='1J', array=np.arange(len(beams))) c5 = Column(name='POL', format='1J', array=[bm.meta['POL'] if 'POL' in bm.meta else 0 for bm in beams]) bmhdu = BinTableHDU.from_columns([c1, c2, c3, c4, c5]) bmhdu.header['EXTNAME'] = 'BEAMS' bmhdu.header['EXTVER'] = 1 bmhdu.header['XTENSION'] = 'BINTABLE' bmhdu.header['NCHAN'] = len(beams) bmhdu.header['NPOL'] = len(set([bm.meta['POL'] for bm in beams if 'POL' in bm.meta])) return bmhdu def beam_props(beams, includemask=None): ''' Returns separate quantities for the major, minor, and PA of a list of beams. ''' if includemask is None: includemask = itertools.cycle([True]) major = u.Quantity([bm.major for bm, incl in zip(beams, includemask) if incl], u.deg) minor = u.Quantity([bm.minor for bm, incl in zip(beams, includemask) if incl], u.deg) pa = u.Quantity([bm.pa for bm, incl in zip(beams, includemask) if incl], u.deg) return major, minor, pa def largest_beam(beams, includemask=None): """ Returns the largest beam (by area) in a list of beams. """ from radio_beam import Beam major, minor, pa = beam_props(beams, includemask) largest_idx = (major * minor).argmax() new_beam = Beam(major=major[largest_idx], minor=minor[largest_idx], pa=pa[largest_idx]) return new_beam def smallest_beam(beams, includemask=None): """ Returns the smallest beam (by area) in a list of beams. 
""" from radio_beam import Beam major, minor, pa = beam_props(beams, includemask) smallest_idx = (major * minor).argmin() new_beam = Beam(major=major[smallest_idx], minor=minor[smallest_idx], pa=pa[smallest_idx]) return new_beam @contextlib.contextmanager def _map_context(numcores): """ Mapping context manager to allow parallel mapping or regular mapping depending on the number of cores specified. The builtin map is overloaded to handle python3 problems: python3 returns a generator, while ``multiprocessing.Pool.map`` actually runs the whole thing """ if numcores is not None and numcores > 1: try: from joblib import Parallel, delayed from joblib.pool import has_shareable_memory map = lambda x,y: Parallel(n_jobs=numcores)(delayed(has_shareable_memory)(x))(y) parallel = True except ImportError: map = lambda x,y: list(builtins.map(x,y)) warnings.warn("Could not import joblib. " "map will be non-parallel.", ImportError ) parallel = False else: parallel = False map = lambda x,y: list(builtins.map(x,y)) yield map def convert_bunit(bunit): ''' Convert a BUNIT string to a quantity Parameters ---------- bunit : str String to convert to an `~astropy.units.Unit` Returns ------- unit : `~astropy.unit.Unit` Corresponding unit. ''' # special case: CASA (sometimes) makes non-FITS-compliant jy/beam headers bunit_lower = re.sub(r"\s", "", bunit.lower()) if bunit_lower == 'jy/beam': unit = u.Jy / u.beam else: try: unit = u.Unit(bunit) except ValueError: warnings.warn("Could not parse unit {0}".format(bunit), AstropyUserWarning) unit = None return unit def world_take_along_axis(cube, position_plane, axis): ''' Convert a 2D plane of pixel positions to the equivalent WCS coordinates. For example, this will convert `argmax` along the spectral axis to the equivalent spectral value (e.g., velocity at peak intensity). Parameters ---------- cube : SpectralCube A spectral cube. position_plane : 2D numpy.ndarray 2D array of pixel positions along `axis`. For example, `position_plane` can be the output of `argmax` or `argmin` along an axis. axis : int The axis that `position_plane` is collapsed along. Returns ------- out : astropy.units.Quantity 2D array of WCS coordinates. ''' if wcs_utils.is_pixel_axis_to_wcs_correlated(cube.wcs, axis): raise WCSCelestialError("world_take_along_axis requires the celestial axes" " to be aligned along image axes.") # Get 1D slice along that axis. world_slice = [0, 0] world_slice.insert(axis, slice(None)) world_coords = cube.world[tuple(world_slice)][axis] world_newaxis = [np.newaxis] * 2 world_newaxis.insert(axis, slice(None)) world_newaxis = tuple(world_newaxis) plane_newaxis = [slice(None), slice(None)] plane_newaxis.insert(axis, np.newaxis) plane_newaxis = tuple(plane_newaxis) out = np.take_along_axis(world_coords[world_newaxis], position_plane[plane_newaxis], axis=axis) out = out.squeeze() return out def _has_beam(obj): if hasattr(obj, '_beam'): return obj._beam is not None else: return False def _has_beams(obj): if hasattr(obj, '_beams'): return obj._beams is not None else: return False def bunit_converters(obj, unit, equivalencies=(), freq=None): ''' Handler for all brightness unit conversions, including: K, Jy/beam, Jy/pix, Jy/sr. This also includes varying resolution spectral cubes, where the beam size varies along the frequency axis. Parameters ---------- obj : {SpectralCube, LowerDimensionalObject} A spectral cube or any other lower dimensional object. unit : `~astropy.units.Unit` Unit to convert `obj` to. equivalencies : tuple, optional Initial list of equivalencies. 
freq : `~astropy.unit.Quantity`, optional Frequency to use for spectral conversions. If the spectral axis is available, the frequencies will already be defined. Outputs ------- factor : `~numpy.ndarray` Array of factors for the unit conversion. ''' # Add a simple check it the new unit is already equivalent, and so we don't need # any additional unit equivalencies if obj.unit.is_equivalent(unit): # return equivalencies factor = obj.unit.to(unit, equivalencies=equivalencies) return np.array([factor]) # Determine the bunit "type". This will determine what information we need for the unit conversion. has_btemp = obj.unit.is_equivalent(u.K) or unit.is_equivalent(u.K) has_perbeam = obj.unit.is_equivalent(u.Jy/u.beam) or unit.is_equivalent(u.Jy/u.beam) has_perangarea = obj.unit.is_equivalent(u.Jy/u.sr) or unit.is_equivalent(u.Jy/u.sr) has_perpix = obj.unit.is_equivalent(u.Jy/u.pix) or unit.is_equivalent(u.Jy/u.pix) # Is there any beam object defined? has_beam = _has_beam(obj) or _has_beams(obj) # Set if this is a varying resolution object has_beams = _has_beams(obj) # Define freq, if needed: if any([has_perangarea, has_perbeam, has_btemp]): # Create a beam equivalency for brightness temperature # This requires knowing the frequency along the spectral axis. if freq is None: try: freq = obj.with_spectral_unit(u.Hz).spectral_axis except AttributeError: raise TypeError("Object of type {0} has no spectral " "information. `freq` must be provided for" " unit conversion from Jy/beam" .format(type(obj))) else: if not freq.unit.is_equivalent(u.Hz): raise u.UnitsError("freq must be given in equivalent " "frequency units.") freq = freq.reshape((-1,)) else: freq = [None] # To handle varying resolution objects, loop through "channels" # Default to a single iteration for a 2D spatial object or when a beam is not defined # This allows handling all 1D, 2D, and 3D data products. if has_beams: iter = range(len(obj.beams)) beams = obj.beams elif has_beam: iter = range(0, 1) beams = [obj.beam] else: iter = range(0, 1) beams = [None] # Append the unit conversion factors factors = [] # Iterate through spectral channels. for ii in iter: beam = beams[ii] # Use the range of frequencies when the beam does not change. Otherwise, select the # frequency corresponding to this beam. if has_beams: thisfreq = freq[ii] else: thisfreq = freq # Changes in beam require a new equivalency for each. this_equivalencies = deepcopy(equivalencies) # Equivalencies for Jy per ang area. if has_perangarea: bmequiv_angarea = u.brightness_temperature(thisfreq) this_equivalencies = list(this_equivalencies) + bmequiv_angarea # Beam area equivalencies for Jy per beam and/or Jy per ang area if has_perbeam or has_perangarea: if not has_beam: raise ValueError("To convert cubes with Jy/beam units, " "the cube needs to have a beam defined.") # create a beam equivalency for brightness temperature bmequiv = beam.jtok_equiv(thisfreq) # NOTE: `beamarea_equiv` was included in the radio-beam v0.3.3 release # The if/else here handles potential cases where earlier releases are installed. if hasattr(beam, 'beamarea_equiv'): bmarea_equiv = beam.beamarea_equiv else: bmarea_equiv = u.beam_angular_area(beam.sr) this_equivalencies = list(this_equivalencies) + bmequiv + bmarea_equiv # Equivalencies for Jy per pixel area. 
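# (The Jy/pix <-> Jy/sr equivalency below just divides or multiplies by the
# solid angle subtended by a single pixel, computed from the celestial part
# of the WCS.)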
if has_perpix: if not obj.wcs.has_celestial: raise ValueError("Spatial WCS information is required for unit conversions" " involving spatial areas (e.g., Jy/pix, Jy/sr)") pix_area = (proj_plane_pixel_area(obj.wcs.celestial) * u.deg**2).to(u.sr) pix_area_equiv = [(u.Jy / u.pix, u.Jy / u.sr, lambda x: x / pix_area.value, lambda x: x * pix_area.value)] this_equivalencies = list(this_equivalencies) + pix_area_equiv # Define full from brightness temp to Jy / pix. # Otherwise isn't working in 1 step if has_btemp: if not has_beam: raise ValueError("Conversions between K and Jy/beam or Jy/pix" "requires the cube to have a beam defined.") jtok_factor = beam.jtok(thisfreq) / (u.Jy / u.beam) # We're going to do this piecemeal because it's easier to conceptualize # We specifically anchor these conversions based on the beam area. So from # beam to pix, this is beam -> angular area -> area per pixel # Altogether: # K -> Jy/beam -> Jy /sr - > Jy / pix forward_factor = 1 / (jtok_factor * (beam.sr / u.beam) / (pix_area / u.pix)) reverse_factor = jtok_factor * (beam.sr / u.beam) / (pix_area / u.pix) pix_area_btemp_equiv = [(u.K, u.Jy / u.pix, lambda x: x * forward_factor.value, lambda x: x * reverse_factor.value)] this_equivalencies = list(this_equivalencies) + pix_area_btemp_equiv # Equivalencies between pixel and angular areas. if has_perbeam: if not has_beam: raise ValueError("Conversions between Jy/beam or Jy/pix" "requires the cube to have a beam defined.") beam_area = beam.sr pix_area_btemp_equiv = [(u.Jy / u.pix, u.Jy / u.beam, lambda x: x * (beam_area / pix_area).value, lambda x: x * (pix_area / beam_area).value)] this_equivalencies = list(this_equivalencies) + pix_area_btemp_equiv factor = obj.unit.to(unit, equivalencies=this_equivalencies) factors.append(factor) if has_beams: return factors else: # Slice along first axis to return a 1D array. return factors[0] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/dask_spectral_cube.py0000644000175100001710000017327700000000000022631 0ustar00runnerdocker""" A class to represent a 3-d position-position-velocity spectral cube. """ from __future__ import print_function, absolute_import, division import uuid import inspect import warnings import tempfile from functools import wraps from contextlib import contextmanager from astropy import units as u from astropy.io.fits import PrimaryHDU, HDUList from astropy.wcs.utils import proj_plane_pixel_area import numpy as np import dask import dask.array as da from astropy import stats from astropy import convolution from astropy import wcs from . 
import wcs_utils from .spectral_cube import SpectralCube, VaryingResolutionSpectralCube, SIGMA2FWHM, np2wcs from .utils import cached, VarianceWarning, SliceWarning, BeamWarning, SmoothingWarning, BeamUnitsError from .lower_dimensional_structures import Projection from .masks import BooleanArrayMask, is_broadcastable_and_smaller from .np_compat import allbadtonan __all__ = ['DaskSpectralCube', 'DaskVaryingResolutionSpectralCube'] try: from scipy import ndimage import scipy.interpolate SCIPY_INSTALLED = True except ImportError: SCIPY_INSTALLED = False try: import zarr import fsspec except ImportError: ZARR_INSTALLED = False else: ZARR_INSTALLED = True def nansum_allbadtonan(dask_array, axis=None, keepdims=None): return da.reduction(dask_array, allbadtonan(np.nansum), allbadtonan(np.nansum), axis=axis, dtype=dask_array.dtype) def ignore_warnings(function): @wraps(function) def wrapper(self, *args, **kwargs): with warnings.catch_warnings(): warnings.simplefilter('ignore') return function(self, *args, **kwargs) return wrapper def add_save_to_tmp_dir_option(function): @wraps(function) def wrapper(self, *args, **kwargs): save_to_tmp_dir = kwargs.pop('save_to_tmp_dir', False) cube = function(self, *args, **kwargs) if save_to_tmp_dir and isinstance(cube, DaskSpectralCubeMixin): if not ZARR_INSTALLED: raise ImportError("saving the cube to a temporary directory " "requires the zarr and fsspec packages to " "be installed.") filename = tempfile.mktemp() with dask.config.set(**cube._scheduler_kwargs): cube._data.to_zarr(filename) cube._data = da.from_zarr(filename) return cube return wrapper def projection_if_needed(function): # check if function defines default projection kwargs parameters = inspect.signature(function).parameters if 'projection' in parameters: default_projection = parameters['projection'].default else: default_projection = True if 'unit' in parameters: default_unit = parameters['unit'].default else: default_unit = 'self' @wraps(function) def wrapper(self, *args, **kwargs): projection = kwargs.get('projection', default_projection) unit = kwargs.get('unit', default_unit) if unit == 'self': unit = self.unit out = function(self, *args, **kwargs) axis = kwargs.get('axis') if isinstance(out, da.Array): out = self._compute(out) if axis is None: # return is scalar if unit is not None: return u.Quantity(out, unit=unit) else: return out elif projection and axis is not None and self._naxes_dropped(axis) in (1, 2): meta = {'collapse_axis': axis} meta.update(self._meta) if hasattr(axis, '__len__') and len(axis) == 2: # if operation is over two spatial dims if set(axis) == set((1, 2)): new_wcs = self._wcs.sub([wcs.WCSSUB_SPECTRAL]) header = self._nowcs_header if hasattr(self, '_beam') and self._beam is not None: bmarg = {'beam': self.beam} elif hasattr(self, '_beams') and self._beams is not None: bmarg = {'beams': self.unmasked_beams} else: bmarg = {} return self._oned_spectrum(value=out, wcs=new_wcs, copy=False, unit=unit, header=header, meta=meta, spectral_unit=self._spectral_unit, **bmarg ) else: warnings.warn("Averaging over a spatial and a spectral " "dimension cannot produce a Projection " "quantity (no units or WCS are preserved).", SliceWarning) return out else: new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) header = self._nowcs_header return Projection(out, copy=False, wcs=new_wcs, meta=meta, unit=unit, header=header) else: return out return wrapper class FilledArrayHandler: """ This class is a wrapper for the data and mask which can be used to initialize a dask array. 
It provides a way for the filled data to be constructed just for the requested chunks. """ def __init__(self, cube, fill=np.nan): self._data = cube._data self._mask = cube._mask self._fill = fill self._wcs = cube._wcs self._wcs_tolerance = cube._wcs_tolerance self.shape = cube._data.shape self.dtype = cube._data.dtype self.ndim = len(self.shape) def __getitem__(self, view): if self._data[view].size == 0: return 0. else: return self._mask._filled(data=self._data, view=view, wcs=self._wcs, fill=self._fill, wcs_tolerance=self._wcs_tolerance) class MaskHandler: """ This class is a wrapper for the mask which can be used to initialize a dask array. It provides a way for the mask to be computed just for the requested chunk. """ def __init__(self, cube): self._data = cube._data self._mask = cube.mask self.shape = cube._data.shape self.dtype = cube._data.dtype self.ndim = len(self.shape) def __getitem__(self, view): if self._data[view].size == 0: return False else: result = self._mask.include(view=view) if isinstance(result, da.Array): result = result.compute() return result class DaskSpectralCubeMixin: _scheduler_kwargs = {'scheduler': 'synchronous'} def _new_cube_with(self, *args, **kwargs): # The scheduler should be preserved for cubes produced as a result # of this one. new_cube = super()._new_cube_with(*args, **kwargs) new_cube._scheduler_kwargs = self._scheduler_kwargs return new_cube @property def _data(self): return self.__data @_data.setter def _data(self, value): if not isinstance(value, da.Array): raise TypeError('_data should be set to a dask array') self.__data = value def use_dask_scheduler(self, scheduler, num_workers=None): """ Set the dask scheduler to use. Can be used as a function or a context manager. Parameters ---------- scheduler : str Any valid dask scheduler. See https://docs.dask.org/en/latest/scheduler-overview.html for an overview of available schedulers. num_workers : int Number of workers to use for the 'threads' and 'processes' schedulers. """ original_scheduler_kwargs = self._scheduler_kwargs self._scheduler_kwargs = {'scheduler': scheduler} if num_workers is not None: self._scheduler_kwargs['num_workers'] = num_workers self._num_workers = num_workers class SchedulerHandler: def __init__(self, cube, original_scheduler_kwargs): self.cube = cube self.original_scheduler_kwargs = original_scheduler_kwargs def __enter__(self): pass def __exit__(self, *args): self.cube._scheduler_kwargs = self.original_scheduler_kwargs return SchedulerHandler(self, original_scheduler_kwargs) def _compute(self, array): return array.compute(**self._scheduler_kwargs) def _warn_slow(self, funcname): if self._is_huge and not self.allow_huge_operations: raise ValueError("This function ({0}) requires loading the entire " "cube into memory, and the cube is large ({1} " "pixels), so by default we disable this operation. " "To enable the operation, set " "`cube.allow_huge_operations=True` and try again." 
.format(funcname, self.size)) def _get_filled_data(self, view=(), fill=np.nan, check_endian=None, use_memmap=None): if check_endian: if not self._data.dtype.isnative: kind = str(self._data.dtype.kind) sz = str(self._data.dtype.itemsize) dt = '=' + kind + sz data = self._data.astype(dt) else: data = self._data else: data = self._data if self._mask is None: return data[view] else: return da.from_array(FilledArrayHandler(self, fill=fill), name='FilledArrayHandler ' + str(uuid.uuid4()), chunks=data.chunksize)[view] def __repr__(self): default_repr = super().__repr__() lines = default_repr.splitlines() lines[0] = lines[0][:-1] + ' and chunk size {0}:'.format(self._data.chunksize) return '\n'.join(lines) @add_save_to_tmp_dir_option def rechunk(self, chunks='auto', threshold=None, block_size_limit=None, **kwargs): """ Rechunk the underlying dask array and return a new cube. For more details about the parameters below, see the dask documentation about `rechunking `_. Parameters ---------- chunks: int, tuple, dict or str, optional The new block dimensions to create. -1 indicates the full size of the corresponding dimension. Default is "auto" which automatically determines chunk sizes. This can also be a tuple with a different value along each dimension - for example if computing moment maps, you could use e.g. ``chunks=(-1, 'auto', 'auto')`` threshold: int, optional The graph growth factor under which we don't bother introducing an intermediate step. block_size_limit: int, optional The maximum block size (in bytes) we want to produce Defaults to the dask configuration value ``array.chunk-size`` save_to_tmp_dir : bool If `True`, the rechunking will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the rechunking is added as a step in the dask tree. kwargs Additional keyword arguments are passed to the dask rechunk method. """ newdata = self._data.rechunk(chunks=chunks, threshold=threshold, block_size_limit=block_size_limit) return self._new_cube_with(data=newdata) @add_save_to_tmp_dir_option @projection_if_needed def apply_function(self, function, axis=None, unit=None, projection=False, keep_shape=False, **kwargs): """ Apply a function to valid data along the specified axis or to the whole cube, optionally using a weight array that is the same shape (or at least can be sliced in the same way) Parameters ---------- function : function A function that can be applied to a numpy array. Does not need to be nan-aware axis : 1, 2, 3, or None The axis to operate along. If None, the return is scalar. unit : (optional) `~astropy.units.Unit` The unit of the output projection or value. Not all functions should return quantities with units. projection : bool Return a projection if the resulting array is 2D? keep_shape : bool If `True`, the returned object will be the same dimensionality as the cube. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. Returns ------- result : :class:`~spectral_cube.lower_dimensional_structures.Projection` or `~astropy.units.Quantity` or float The result depends on the value of ``axis``, ``projection``, and ``unit``. If ``axis`` is None, the return will be a scalar with or without units. 
If axis is an integer, the return will be a :class:`~spectral_cube.lower_dimensional_structures.Projection` if ``projection`` is set """ if axis is None: out = function(self.flattened(), **kwargs) if unit is not None: return u.Quantity(out, unit=unit) else: return out data = self._get_filled_data(fill=self._fill_value) if keep_shape: newdata = da.apply_along_axis(function, axis, data, shape=(self.shape[axis],)) else: newdata = da.apply_along_axis(function, axis, data) return newdata @add_save_to_tmp_dir_option @projection_if_needed def apply_numpy_function(self, function, fill=np.nan, projection=False, unit=None, check_endian=False, **kwargs): """ Apply a numpy function to the cube Parameters ---------- function : Numpy ufunc A numpy ufunc to apply to the cube fill : float The fill value to use on the data projection : bool Return a :class:`~spectral_cube.lower_dimensional_structures.Projection` if the resulting array is 2D or a OneDProjection if the resulting array is 1D and the sum is over both spatial axes? unit : None or `astropy.units.Unit` The unit to include for the output array. For example, `SpectralCube.max` calls ``SpectralCube.apply_numpy_function(np.max, unit=self.unit)``, inheriting the unit from the original cube. However, for other numpy functions, e.g. `numpy.argmax`, the return is an index and therefore unitless. check_endian : bool A flag to check the endianness of the data before applying the function. This is only needed for optimized functions, e.g. those in the `bottleneck `_ package. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. kwargs : dict Passed to the numpy function. Returns ------- result : :class:`~spectral_cube.lower_dimensional_structures.Projection` or `~astropy.units.Quantity` or float The result depends on the value of ``axis``, ``projection``, and ``unit``. If ``axis`` is None, the return will be a scalar with or without units. If axis is an integer, the return will be a :class:`~spectral_cube.lower_dimensional_structures.Projection` if ``projection`` is set """ data = self._get_filled_data(fill=fill, check_endian=check_endian) # Numpy ufuncs know how to deal with dask arrays if function.__module__.startswith('numpy'): return function(data, **kwargs) else: # TODO: implement support for bottleneck? or arbitrary ufuncs? raise NotImplementedError() @add_save_to_tmp_dir_option def apply_function_parallel_spatial(self, function, accepts_chunks=False, **kwargs): """ Apply a function in parallel along the spatial dimension. The function will be performed on data with masked values replaced with the cube's fill value. Parameters ---------- function : function The function to apply in the spatial dimension. It must take two arguments: an array representing an image and a boolean array representing the mask. It may also accept ``**kwargs``. The function must return an object with the same shape as the input image. accepts_chunks : bool Whether the function can take chunks with shape (ns, ny, nx) where ``ns`` is the number of spectral channels in the cube and ``nx`` and ``ny`` may be greater than one. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. 
This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. kwargs : dict Passed to ``function`` """ if accepts_chunks: def wrapper(data_slices, **kwargs): if data_slices.size > 0: return function(data_slices, **kwargs) else: return data_slices else: def wrapper(data_slices, **kwargs): if data_slices.size > 0: out = np.zeros_like(data_slices) for index in range(data_slices.shape[0]): out[index] = function(data_slices[index], **kwargs) return out else: return data_slices # Rechunk so that there is only one chunk in the image plane return self._map_blocks_to_cube(wrapper, rechunk=('auto', -1, -1), fill=self._fill_value, **kwargs) @add_save_to_tmp_dir_option def apply_function_parallel_spectral(self, function, accepts_chunks=False, return_new_cube=True, **kwargs): """ Apply a function in parallel along the spectral dimension. The function will be performed on data with masked values replaced with the cube's fill value. Parameters ---------- function : function The function to apply in the spectral dimension. It must take two arguments: an array representing a spectrum and a boolean array representing the mask. It may also accept ``**kwargs``. The function must return an object with the same shape as the input spectrum. accepts_chunks : bool Whether the function can take chunks with shape (ns, ny, nx) where ``ns`` is the number of spectral channels in the cube and ``nx`` and ``ny`` may be greater than one. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. return_new_cube : bool If `True`, a new `~SpectralCube` object will be returned. This is the default for when the function will return another version of the new spectral cube with the operation applied (for example, spectral smoothing). If `False`, an array will be returned from `function`. This is useful, for example, when fitting a model to spectra and the output is the fitted model parameters. kwargs : dict Passed to ``function`` """ # NOTE: `block_info` should always be available for `dask.array.map_blocks` to pass to # Because we use this wrapper, this always should be an available kwarg, then we check # if that kwarg should be passed to `function` _has_blockinfo = 'block_info' in inspect.signature(function).parameters # if/else to avoid an if/else in every single wrapper call. if _has_blockinfo: def wrapper(data, block_info=None, **kwargs): if data.size > 0: return function(data, block_info=block_info, **kwargs) else: return data else: def wrapper(data, **kwargs): if data.size > 0: return function(data, **kwargs) else: return data if accepts_chunks: # Check if the spectral axis is already one chunk. If it is, there is no need to rechunk the data current_chunksize = self._data.chunksize if current_chunksize[0] == self.shape[0]: rechunk = None else: rechunk = (-1, 'auto', 'auto') return self._map_blocks_to_cube(wrapper, return_new_cube=return_new_cube, rechunk=rechunk, **kwargs) else: data = self._get_filled_data(fill=self._fill_value) # apply_along_axis returns an array with a single chunk, but we # need to rechunk here to avoid issues when writing out the data # even if it results in a poorer performance. 
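# (In the rechunk below, -1 keeps the full spectral axis in a single chunk so
# the function sees complete spectra, while 'auto' lets dask choose the
# spatial chunk sizes.)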
data = data.rechunk((-1, 'auto', 'auto')) newdata = da.apply_along_axis(wrapper, 0, data, shape=(self.shape[0],)) if return_new_cube: return self._new_cube_with(data=newdata, wcs=self.wcs, mask=self.mask, meta=self.meta, fill_value=self.fill_value) else: return newdata @projection_if_needed @ignore_warnings def sum(self, axis=None, **kwargs): """ Return the sum of the cube, optionally over an axis. """ return self._compute(nansum_allbadtonan(self._get_filled_data(fill=np.nan), axis=axis, **kwargs)) @projection_if_needed @ignore_warnings def mean(self, axis=None, **kwargs): """ Return the mean of the cube, optionally over an axis. """ return self._compute(da.nanmean(self._get_filled_data(fill=np.nan), axis=axis, **kwargs)) @projection_if_needed @ignore_warnings def median(self, axis=None, **kwargs): """ Return the median of the cube, optionally over an axis. """ data = self._get_filled_data(fill=np.nan) if axis is None: # da.nanmedian raises NotImplementedError since it is not possible # to do efficiently, so we use Numpy instead. self._warn_slow('median') return np.nanmedian(self._compute(data), **kwargs) else: return self._compute(da.nanmedian(self._get_filled_data(fill=np.nan), axis=axis, **kwargs)) @projection_if_needed @ignore_warnings def percentile(self, q, axis=None, **kwargs): """ Return percentiles of the data. Parameters ---------- q : float The percentile to compute axis : int, or None Which axis to compute percentiles over """ data = self._get_filled_data(fill=np.nan) if axis is None: # There is no way to compute the percentile of the whole array in # chunks. self._warn_slow('percentile') return np.nanpercentile(data, q, **kwargs) else: # Rechunk so that there is only one chunk along the desired axis data = data.rechunk([-1 if i == axis else 'auto' for i in range(3)]) return self._compute(data.map_blocks(np.nanpercentile, q=q, drop_axis=axis, axis=axis, **kwargs)) @projection_if_needed @ignore_warnings def std(self, axis=None, ddof=0, **kwargs): """ Return the mean of the cube, optionally over an axis. Other Parameters ---------------- ddof : int Means Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements. By default ``ddof`` is zero. """ return self._compute(da.nanstd(self._get_filled_data(fill=np.nan), axis=axis, ddof=ddof, **kwargs)) @projection_if_needed @ignore_warnings def mad_std(self, axis=None, ignore_nan=True, **kwargs): """ Use astropy's mad_std to compute the standard deviation """ data = self._get_filled_data(fill=np.nan) if axis is None: # In this case we have to load the full data - even dask's # nanmedian doesn't work efficiently over the whole array. self._warn_slow('mad_std') return stats.mad_std(data, ignore_nan=ignore_nan, **kwargs) else: # Rechunk so that there is only one chunk along the desired axis data = data.rechunk([-1 if i == axis else 'auto' for i in range(3)]) return self._compute(data.map_blocks(stats.mad_std, drop_axis=axis, axis=axis, ignore_nan=ignore_nan, **kwargs)) @projection_if_needed @ignore_warnings def max(self, axis=None, **kwargs): """ Return the maximum data value of the cube, optionally over an axis. """ return self._compute(da.nanmax(self._get_filled_data(fill=np.nan), axis=axis, **kwargs)) @projection_if_needed @ignore_warnings def min(self, axis=None, **kwargs): """ Return the minimum data value of the cube, optionally over an axis. 
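A usage sketch (assumes ``cube`` is an existing ``DaskSpectralCube``)::

    >>> cube.min()        # global minimum as a scalar Quantity  # doctest: +SKIP
    >>> cube.min(axis=0)  # 2D minimum map (Projection)  # doctest: +SKIP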
""" return self._compute(da.nanmin(self._get_filled_data(fill=np.nan), axis=axis, **kwargs)) @ignore_warnings def argmax(self, axis=None, **kwargs): """ Return the index of the maximum data value. The return value is arbitrary if all pixels along ``axis`` are excluded from the mask. """ return self._compute(da.nanargmax(self._get_filled_data(fill=-np.inf), axis=axis, **kwargs)) @ignore_warnings def argmin(self, axis=None, **kwargs): """ Return the index of the minimum data value. The return value is arbitrary if all pixels along ``axis`` are excluded from the mask. """ return self._compute(da.nanargmin(self._get_filled_data(fill=np.inf), axis=axis)) @ignore_warnings def statistics(self): """ Return a dictinary of global basic statistics for the data. This method is designed to minimize the number of times each chunk is accessed. The statistics are computed for each chunk in turn before being aggregated. The names for each statistic are adopted from CASA's ia.statistics (see https://casa.nrao.edu/Release4.1.0/doc/CasaRef/image.statistics.html) """ data = self._get_filled_data(fill=np.nan) try: from bottleneck import nanmin, nanmax, nansum except ImportError: from numpy import nanmin, nanmax, nansum def compute_stats(chunk, *args): return np.array([np.sum(~np.isnan(chunk)), nanmin(chunk), nanmax(chunk), nansum(chunk), nansum(chunk * chunk)]) with dask.config.set(**self._scheduler_kwargs): results = da.map_blocks(compute_stats, data.reshape(-1)).compute() count_values, min_values, max_values, sum_values, ssum_values = results.reshape((-1, 5)).T stats = {'npts': count_values.sum(), 'min': min_values.min() * self._unit, 'max': max_values.max() * self._unit, 'sum': sum_values.sum() * self._unit, 'sumsq': ssum_values.sum() * self._unit ** 2} stats['mean'] = stats['sum'] / stats['npts'] # FIXME: for now this uses the simple 'textbook' algorithm which is not # numerically stable, so this should be replaced by a more robust approach stats['sigma'] = ((stats['sumsq'] - stats['sum'] ** 2 / stats['npts']) / (stats['npts'] - 1)) ** 0.5 stats['rms'] = np.sqrt(stats['sumsq'] / stats['npts']) return stats def _map_blocks_to_cube(self, function, additional_arrays=None, fill=np.nan, rechunk=None, return_new_cube=True, **kwargs): """ Call dask's map_blocks, returning a new spectral cube. """ data = self._get_filled_data(fill=fill) if rechunk is not None: data = data.rechunk(rechunk) if additional_arrays is None: newdata = da.map_blocks(function, data, dtype=data.dtype, **kwargs) else: additional_arrays = [array.rechunk(data.chunksize) for array in additional_arrays] newdata = da.map_blocks(function, data, *additional_arrays, dtype=data.dtype, **kwargs) # Create final output cube if return_new_cube: newcube = self._new_cube_with(data=newdata, wcs=self.wcs, mask=self.mask, meta=self.meta, fill_value=self.fill_value) return newcube else: return newdata # NOTE: the following three methods could also be implemented spaxel by # spaxel using apply_function_parallel_spectral but then take longer (but # less memory) @add_save_to_tmp_dir_option def sigma_clip_spectrally(self, threshold, **kwargs): """ Run astropy's sigma clipper along the spectral axis, converting all bad (excluded) values to NaN. Parameters ---------- threshold : float The ``sigma`` parameter in `astropy.stats.sigma_clip`, which refers to the number of sigma above which to cut. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. 
This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. kwargs : dict Passed to the sigma_clip function """ def spectral_sigma_clip(array): return stats.sigma_clip(array, sigma=threshold, axis=0, masked=False, copy=True, **kwargs) return self.apply_function_parallel_spectral(spectral_sigma_clip, accepts_chunks=True) @add_save_to_tmp_dir_option def spectral_smooth(self, kernel, convolve=convolution.convolve, **kwargs): """ Smooth the cube along the spectral dimension Note that the mask is left unchanged in this operation. Parameters ---------- kernel : `~astropy.convolution.Kernel1D` A 1D kernel from astropy convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. kwargs : dict Passed to the convolve function """ if isinstance(kernel.array, u.Quantity): raise u.UnitsError("The convolution kernel should be defined " "without a unit.") def spectral_smooth(array): kernel_3d = kernel.array.reshape((len(kernel.array), 1, 1)) return convolve(array, kernel_3d, normalize_kernel=True) return self.apply_function_parallel_spectral(spectral_smooth, accepts_chunks=True) @add_save_to_tmp_dir_option def spectral_smooth_median(self, ksize, raise_error_jybm=True, **kwargs): """ Smooth the cube along the spectral dimension Parameters ---------- ksize : int Size of the median filter (scipy.ndimage.filters.median_filter) save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. kwargs : dict Not used at the moment. """ if not SCIPY_INSTALLED: raise ImportError("Scipy could not be imported: this function won't work.") if float(ksize).is_integer(): ksize = int(ksize) else: raise TypeError('ksize should be an integer (got {0})'.format(ksize)) def median_filter_wrapper(img, **kwargs): return ndimage.median_filter(img, (ksize, 1, 1), **kwargs) return self.apply_function_parallel_spectral(median_filter_wrapper, accepts_chunks=True) @add_save_to_tmp_dir_option def spatial_smooth(self, kernel, convolve=convolution.convolve, raise_error_jybm=True, **kwargs): """ Smooth the image in each spatial-spatial plane of the cube. Parameters ---------- kernel : `~astropy.convolution.Kernel2D` A 2D kernel from astropy convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` raise_error_jybm : bool, optional Raises a `~spectral_cube.utils.BeamUnitsError` when smoothing a cube in Jy/beam units, since the brightness is dependent on the spatial resolution. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. 
If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. kwargs : dict Passed to the convolve function """ self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) def convolve_wrapper(data, kernel=None, **kwargs): return convolve(data, kernel, normalize_kernel=True, **kwargs) return self.apply_function_parallel_spatial(convolve_wrapper, kernel=kernel.array) @add_save_to_tmp_dir_option def spatial_smooth_median(self, ksize, raise_error_jybm=True, **kwargs): """ Smooth the image in each spatial-spatial plane of the cube using a median filter. Parameters ---------- ksize : int Size of the median filter (scipy.ndimage.filters.median_filter) raise_error_jybm : bool, optional Raises a `~spectral_cube.utils.BeamUnitsError` when smoothing a cube in Jy/beam units, since the brightness is dependent on the spatial resolution. kwargs : dict Passed to the median_filter function """ if not SCIPY_INSTALLED: raise ImportError("Scipy could not be imported: this function won't work.") self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) def median_filter_wrapper(data, ksize=None, **kwargs): return ndimage.median_filter(data, ksize, **kwargs) return self.apply_function_parallel_spatial(median_filter_wrapper, ksize=ksize) def moment(self, order=0, axis=0, **kwargs): """ Compute moments along the spectral axis. Moments are defined as follows: Moment 0: .. math:: M_0 = \\int I dl Moment 1: .. math:: M_1 = \\frac{\\int I l dl}{M_0} Moment N: .. math:: M_N = \\frac{\\int I (l - M_1)^N dl}{M_0} .. warning:: Note that these follow the mathematical definitions of moments, and therefore the second moment will return a variance map. To get linewidth maps, you can instead use the :meth:`~SpectralCube.linewidth_fwhm` or :meth:`~SpectralCube.linewidth_sigma` methods. Parameters ---------- order : int The order of the moment to take. Default=0 axis : int The axis along which to compute the moment. Default=0 Returns ------- map [, wcs] The moment map (numpy array) and, if wcs=True, the WCS object describing the map Notes ----- For the first moment, the result for axis=1, 2 is the angular offset *relative to the cube face*. For axis=0, it is the *absolute* velocity/frequency of the first moment. """ if axis == 0 and order == 2: warnings.warn("Note that the second moment returned will be a " "variance map.
To get a linewidth map, use the " "SpectralCube.linewidth_fwhm() or " "SpectralCube.linewidth_sigma() methods instead.", VarianceWarning) data = self._get_filled_data(fill=np.nan).astype(np.float64) pix_size = self._pix_size_slice(axis) pix_cen = self._pix_cen()[axis] if order == 0: out = nansum_allbadtonan(data * pix_size, axis=axis) else: denominator = self._compute(nansum_allbadtonan(data * pix_size, axis=axis)) mom1 = (nansum_allbadtonan(data * pix_size * pix_cen, axis=axis) / denominator) if order > 1: # insert an axis so it broadcasts properly shp = list(mom1.shape) shp.insert(axis, 1) mom1 = self._compute(mom1.reshape(shp)) out = (nansum_allbadtonan(data * pix_size * (pix_cen - mom1) ** order, axis=axis) / denominator) else: out = mom1 # force computation, and convert back to original dtype (but native) out = self._compute(out) # apply units if order == 0: if axis == 0 and self._spectral_unit is not None: axunit = unit = self._spectral_unit else: axunit = unit = u.Unit(self._wcs.wcs.cunit[np2wcs[axis]]) out = u.Quantity(out, self.unit * axunit, copy=False) else: if axis == 0 and self._spectral_unit is not None: unit = self._spectral_unit ** max(order, 1) else: unit = u.Unit(self._wcs.wcs.cunit[np2wcs[axis]]) ** max(order, 1) out = u.Quantity(out, unit, copy=False) # special case: for order=1, axis=0, you usually want # the absolute velocity and not the offset if order == 1 and axis == 0: out += self.world[0, :, :][0] new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) meta = {'moment_order': order, 'moment_axis': axis} meta.update(self._meta) return Projection(out, copy=False, wcs=new_wcs, meta=meta, header=self._nowcs_header) def subcube_slices_from_mask(self, region_mask, spatial_only=False): """ Given a mask, return the slices corresponding to the minimum subcube that encloses the mask Parameters ---------- region_mask: `~spectral_cube.masks.MaskBase` or boolean `numpy.ndarray` The mask with appropriate WCS or an ndarray with matched coordinates spatial_only: bool Return only slices that affect the spatial dimensions; the spectral dimension will be left unchanged """ # We need to use a slightly different approach to SpectralCube here # because there isn't yet a dask-friendly version of find_objects # https://github.com/dask/dask-image/issues/96 if isinstance(region_mask, np.ndarray): if is_broadcastable_and_smaller(region_mask.shape, self.shape): region_mask = BooleanArrayMask(region_mask, self._wcs) else: raise ValueError("Mask shape does not match cube shape.") include = region_mask.include(self._data, self._wcs, wcs_tolerance=self._wcs_tolerance) include = da.broadcast_to(include, self.shape) slices = [] for axis in range(3): if axis == 0 and spatial_only: slices.append(slice(None)) continue collapse_axes = tuple(index for index in range(3) if index != axis) valid = self._compute(da.any(include, axis=collapse_axes)) if np.any(valid): indices = np.where(valid)[0] slices.append(slice(np.min(indices), np.max(indices) + 1)) else: slices.append(slice(0)) return tuple(slices) @add_save_to_tmp_dir_option def downsample_axis(self, factor, axis, estimator=np.nanmean, truncate=False): """ Downsample the cube by averaging over *factor* pixels along an axis. Crops right side if the shape is not a multiple of factor. The WCS will be 'downsampled' by the specified factor as well. If the downsample factor is odd, there will be an offset in the WCS. There is both an in-memory and a memory-mapped implementation; the default is to use the memory-mapped version. 
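For example, to average together pairs of spectral channels (an illustrative sketch, assuming ``cube`` is an already-loaded cube)::

    >>> smaller = cube.downsample_axis(factor=2, axis=0)  # doctest: +SKIP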
Technically, the 'large data' warning doesn't apply when using the memory-mapped version, but the warning is still there anyway. Parameters ---------- factor : int The factor to downsample by axis : int The axis to downsample along estimator : function defaults to mean. You can downsample by summing or something else if you want a different estimator (e.g., downsampling error: you want to sum & divide by sqrt(n)) truncate : bool Whether to truncate the last chunk or average over a smaller number. e.g., if you downsample [1,2,3,4] by a factor of 3, you could get either [2] or [2,4] if truncate is True or False, respectively. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. """ # FIXME: this does not work correctly currently due to # https://github.com/dask/dask/issues/6102 warnings.warn('In some cases, the final shape of the output from downsample_axis ' 'is incorrect, so use the result with caution', UserWarning) data = self._get_filled_data(fill=self._fill_value) mask = da.asarray(self.mask.include(), name=str(uuid.uuid4())) if not truncate and data.shape[axis] % factor != 0: padding_shape = list(data.shape) padding_shape[axis] = factor - data.shape[axis] % factor data_padding = da.ones(padding_shape) * np.nan mask_padding = da.zeros(padding_shape, dtype=bool) data = da.concatenate([data, data_padding], axis=axis) mask = da.concatenate([mask, mask_padding], axis=axis).rechunk() data = da.coarsen(estimator, data, {axis: factor}, trim_excess=True) mask = da.coarsen(estimator, mask, {axis: factor}, trim_excess=True) view = [slice(None, None, factor) if ii == axis else slice(None) for ii in range(self.ndim)] newwcs = wcs_utils.slice_wcs(self.wcs, view, shape=self.shape) newwcs._naxis = list(self.shape) # this is an assertion to ensure that the WCS produced is valid # (this is basically a regression test for #442) assert newwcs[:, slice(None), slice(None)] assert len(newwcs._naxis) == 3 return self._new_cube_with(data=data, wcs=newwcs, mask=BooleanArrayMask(mask, wcs=newwcs)) @add_save_to_tmp_dir_option def spectral_interpolate(self, spectral_grid, suppress_smooth_warning=False, fill_value=None): """Resample the cube spectrally onto a specific grid Parameters ---------- spectral_grid : array An array of the spectral positions to regrid onto suppress_smooth_warning : bool If `False` (the default), a warning will be raised when interpolating onto a grid that does not Nyquist sample the existing grid. Set this to `True` if you have already appropriately smoothed the data. fill_value : float Value for extrapolated spectral values that lie outside of the spectral range defined in the original data. The default is to use the nearest spectral channel in the cube. save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. Returns ------- cube : SpectralCube """ # TODO: this duplicates SpectralCube.spectral_interpolate, so we should # find a way to avoid that duplication.
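# Illustrative usage sketch (comments only; `cube` and the grid values are
# hypothetical, and this assumes the cube's spectral axis is already in
# velocity units):
#
#     import astropy.units as u
#     new_grid = np.linspace(-50, 50, 101) * u.km / u.s
#     regridded = cube.spectral_interpolate(new_grid)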
inaxis = self.spectral_axis.to(spectral_grid.unit) indiff = np.mean(np.diff(inaxis)) outdiff = np.mean(np.diff(spectral_grid)) reverse_in = indiff < 0 reverse_out = outdiff < 0 # account for reversed axes if reverse_in: inaxis = inaxis[::-1] indiff = np.mean(np.diff(inaxis)) if reverse_out: spectral_grid = spectral_grid[::-1] outdiff = np.mean(np.diff(spectral_grid)) cubedata = self._get_filled_data(fill=np.nan) # insanity checks if indiff < 0 or outdiff < 0: raise ValueError("impossible.") assert np.all(np.diff(spectral_grid) > 0) assert np.all(np.diff(inaxis) > 0) np.testing.assert_allclose(np.diff(spectral_grid), outdiff, err_msg="Output grid must be linear") if outdiff > 2 * indiff and not suppress_smooth_warning: warnings.warn("Input grid has too small a spacing. The data should " "be smoothed prior to resampling.", SmoothingWarning) if reverse_in: cubedata = cubedata[::-1, :, :] cubedata = cubedata.rechunk((-1, 'auto', 'auto')) chunkshape = (len(spectral_grid),) + cubedata.chunksize[1:] def interp_wrapper(y, args): if y.size == 1: return y else: interp = scipy.interpolate.interp1d(args[1], y.T, fill_value=fill_value, bounds_error=False) return interp(args[0]).T newcube = cubedata.map_blocks(interp_wrapper, args=(spectral_grid.value, inaxis.value), chunks=chunkshape) newwcs = self.wcs.deepcopy() newwcs.wcs.crpix[2] = 1 newwcs.wcs.crval[2] = spectral_grid[0].value if not reverse_out \ else spectral_grid[-1].value newwcs.wcs.cunit[2] = spectral_grid.unit.to_string('FITS') newwcs.wcs.cdelt[2] = outdiff.value if not reverse_out \ else -outdiff.value newwcs.wcs.set() newbmask = BooleanArrayMask(~np.isnan(newcube), wcs=newwcs) if reverse_out: newcube = newcube[::-1, :, :] newcube = self._new_cube_with(data=newcube, wcs=newwcs, mask=newbmask, meta=self.meta, fill_value=self.fill_value) return newcube class DaskSpectralCube(DaskSpectralCubeMixin, SpectralCube): def __init__(self, data, *args, **kwargs): unit = None if not isinstance(data, da.Array): if isinstance(data, u.Quantity): data, unit = data.value, data.unit # NOTE: don't be tempted to chunk this image-wise (following the # data storage) because spectral operations will take forever. data = da.asarray(data, name=str(uuid.uuid4())) super().__init__(data, *args, **kwargs) if self._unit is None and unit is not None: self._unit = unit @classmethod def read(cls, *args, **kwargs): if kwargs.get('use_dask') is None: kwargs['use_dask'] = True return super().read(*args, **kwargs) def write(self, *args, **kwargs): with dask.config.set(**self._scheduler_kwargs): super().write(*args, **kwargs) @property def hdu(self): """ HDU version of self """ return PrimaryHDU(self._get_filled_data(fill=self._fill_value), header=self.header) @property def hdulist(self): return HDUList(self.hdu) @add_save_to_tmp_dir_option def convolve_to(self, beam, convolve=convolution.convolve, **kwargs): """ Convolve each channel in the cube to a specified beam Parameters ---------- beam : `radio_beam.Beam` The beam to convolve to convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk. 
kwargs : dict Keyword arguments to pass to the convolution function Returns ------- cube : `SpectralCube` A SpectralCube with a single ``beam`` """ # Check if the beams are the same. if beam == self.beam: warnings.warn("The given beam is identical to the current beam. " "Skipping convolution.") return self pixscale = proj_plane_pixel_area(self.wcs.celestial)**0.5 * u.deg convolution_kernel = beam.deconvolve(self.beam).as_kernel(pixscale) kernel = convolution_kernel.array.reshape((1,) + convolution_kernel.array.shape) if self.unit.is_equivalent(u.Jy / u.beam): beam_ratio_factor = (beam.sr / self.beam.sr).value else: beam_ratio_factor = 1. # See #631: kwargs get passed within self.apply_function_parallel_spatial def convfunc(img, **kwargs): return convolve(img, kernel, normalize_kernel=True, **kwargs).reshape(img.shape) * beam_ratio_factor return self.apply_function_parallel_spatial(convfunc, accepts_chunks=True, **kwargs).with_beam(beam, raise_error_jybm=False) class DaskVaryingResolutionSpectralCube(DaskSpectralCubeMixin, VaryingResolutionSpectralCube): def __init__(self, data, *args, **kwargs): unit = None if not isinstance(data, da.Array): if isinstance(data, u.Quantity): data, unit = data.value, data.unit # NOTE: don't be tempted to chunk this image-wise (following the # data storage) because spectral operations will take forever. data = da.asarray(data, name=str(uuid.uuid4())) super().__init__(data, *args, **kwargs) if self._unit is None and unit is not None: self._unit = unit @classmethod def read(cls, *args, **kwargs): if kwargs.get('use_dask') is None: kwargs['use_dask'] = True return super().read(*args, **kwargs) def write(self, *args, **kwargs): with dask.config.set(**self._scheduler_kwargs): super().write(*args, **kwargs) @property def hdu(self): raise ValueError("For DaskVaryingResolutionSpectralCubes, use hdulist " "instead of hdu.") @property def hdulist(self): """ HDUList version of self """ hdu = PrimaryHDU(self._get_filled_data(fill=self._fill_value), header=self.header) from .cube_utils import beams_to_bintable # use unmasked beams because, even if the beam is masked out, we should # write it bmhdu = beams_to_bintable(self.unmasked_beams) return HDUList([hdu, bmhdu]) @add_save_to_tmp_dir_option def convolve_to(self, beam, allow_smaller=False, convolve=convolution.convolve_fft, **kwargs): """ Convolve each channel in the cube to a specified beam .. warning:: The current implementation of ``convolve_to`` creates an in-memory copy of the whole cube to store the convolved data. Issue #506 notes that this is a problem, and it is on our to-do list to fix. .. warning:: Note that if there is any misalignment between the cube's spatial pixel axes and the WCS's spatial axes *and* the beams are not round, the convolution kernels used here may be incorrect. Be wary in such cases! Parameters ---------- beam : `radio_beam.Beam` The beam to convolve to allow_smaller : bool If the specified target beam is smaller than the beam in a channel in any dimension and this is ``False``, it will raise an exception. convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` save_to_tmp_dir : bool If `True`, the computation will be carried out straight away and saved to a temporary directory. This can improve performance, especially if carrying out several operations sequentially. If `False`, the computation is only carried out when accessing specific parts of the data or writing to disk.
Returns ------- cube : `SpectralCube` A SpectralCube with a single ``beam`` """ if ((self.wcs.celestial.wcs.get_pc()[0,1] != 0 or self.wcs.celestial.wcs.get_pc()[1,0] != 0)): warnings.warn("The beams will produce convolution kernels " "that are not aware of any misalignment " "between pixel and world coordinates, " "and there are off-diagonal elements of the " "WCS spatial transformation matrix. " "Unexpected results are likely.", BeamWarning ) pixscale = wcs.utils.proj_plane_pixel_area(self.wcs.celestial)**0.5*u.deg beams = [] beam_ratio_factors = [] for bm, valid in zip(self.unmasked_beams, self.goodbeams_mask): if not valid: # just skip masked-out beams beams.append(None) beam_ratio_factors.append(None) continue elif beam == bm: # Point response when beams are equal, don't convolve. beams.append(None) beam_ratio_factors.append(None) continue try: beams.append(beam.deconvolve(bm)) beam_ratio_factors.append((beam.sr / bm.sr).value) except ValueError: if allow_smaller: beams.append(None) beam_ratio_factors.append(None) else: raise # We need to pass in the beams to dask, so we hide them inside an object array # that can then be chunked like the data. beams = da.from_array(np.array(beams, dtype=object) .reshape((len(beams), 1, 1)), chunks=(-1, -1, -1)) needs_beam_ratio = self.unit.is_equivalent(u.Jy / u.beam) # See #631: kwargs get passed within self.apply_function_parallel_spatial def convfunc(img, beam, **kwargs): if img.size > 0: out = np.zeros(img.shape, dtype=img.dtype) for index in range(img.shape[0]): if beam[index, 0, 0] is None: out[index] = img[index] else: kernel = beam[index, 0, 0].as_kernel(pixscale) out[index] = convolve(img[index], kernel, normalize_kernel=True, **kwargs) if needs_beam_ratio: out[index] *= beam_ratio_factors[index] return out else: return img # Rechunk so that there is only one chunk in the image plane cube = self._map_blocks_to_cube(convfunc, additional_arrays=(beams,), rechunk=('auto', -1, -1), **kwargs) # Result above is a DaskVaryingResolutionSpectralCube, convert to DaskSpectralCube newcube = DaskSpectralCube(data=cube._data, beam=beam, wcs=cube.wcs, mask=cube.mask, meta=cube.meta, fill_value=cube.fill_value) newcube._scheduler_kwargs = self._scheduler_kwargs return newcube def spectral_interpolate(self, *args, **kwargs): raise AttributeError("VaryingResolutionSpectralCubes can't be " "spectrally interpolated. Convolve to a " "common resolution with `convolve_to` before " "attempting spectral interpolation.") def spectral_smooth(self, *args, **kwargs): raise AttributeError("VaryingResolutionSpectralCubes can't be " "spectrally smoothed.
Convolve to a " "common resolution with `convolve_to` before " "attempting spectral smoothing.") @property def _mask_include(self): return BooleanArrayMask(da.from_array(MaskHandler(self), name='MaskHandler ' + str(uuid.uuid4()), chunks=self._data.chunksize), wcs=self.wcs, shape=self.shape) ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1633017864.73703 spectral-cube-0.6.0/spectral_cube/io/0000755000175100001710000000000000000000000017030 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/io/__init__.py0000644000175100001710000000010100000000000021131 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/io/casa_image.py0000644000175100001710000002117700000000000021463 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import warnings from astropy import units as u from astropy.io import registry as io_registry from radio_beam import Beam, Beams from .. import DaskSpectralCube, StokesSpectralCube, BooleanArrayMask, DaskVaryingResolutionSpectralCube from ..spectral_cube import BaseSpectralCube from .. import cube_utils from .. utils import BeamWarning from .. import wcs_utils from casa_formats_io import getdesc, coordsys_to_astropy_wcs, image_to_dask # Read and write from a CASA image. This has a few # complications. First, by default CASA does not return the # "python order" and so we either have to transpose the cube on # read or have dueling conventions. Second, CASA often has # degenerate stokes axes present in unpredictable places (3rd or # 4th without a clear expectation). We need to replicate these # when writing but don't want them in memory. By default, try to # yield the same array in memory that we would get from astropy. def is_casa_image(origin, filepath, fileobj, *args, **kwargs): # See note before StringWrapper definition from .core import StringWrapper if filepath is None and len(args) > 0: if isinstance(args[0], StringWrapper): filepath = args[0].value elif isinstance(args[0], str): filepath = args[0] return filepath is not None and filepath.lower().endswith('.image') def load_casa_image(filename, skipdata=False, memmap=True, skipvalid=False, skipcs=False, target_cls=None, use_dask=None, target_chunksize=None, **kwargs): """ Load a cube from a CASA image. By default it will transpose the cube into a 'python' order and drop degenerate axes. These options can be suppressed. The object holds the coordsys object from the image in memory.
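An illustrative read call (a sketch; the filename is a placeholder)::

    >>> from spectral_cube import SpectralCube
    >>> cube = SpectralCube.read('example.image', format='casa')  # doctest: +SKIP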
""" if use_dask is None: use_dask = True if not use_dask: raise ValueError("Loading CASA datasets is not possible with use_dask=False") from .core import StringWrapper if isinstance(filename, StringWrapper): filename = filename.value # read in the data if not skipdata: data = image_to_dask(filename, memmap=memmap, target_chunksize=target_chunksize) # CASA stores validity of data as a mask if skipvalid: valid = None else: try: valid = image_to_dask(filename, memmap=memmap, mask=True, target_chunksize=target_chunksize) except FileNotFoundError: valid = None # transpose is dealt with within the cube object # read in coordinate system object desc = getdesc(filename) casa_cs = desc['_keywords_']['coords'] if 'units' in desc['_keywords_']: unit = desc['_keywords_']['units'] else: unit = '' imageinfo = desc['_keywords_']['imageinfo'] if 'perplanebeams' in imageinfo: beam_ = {'beams': imageinfo['perplanebeams']} beam_['nStokes'] = beam_['beams'].pop('nStokes') beam_['nChannels'] = beam_['beams'].pop('nChannels') beam_['beams'] = {key: {'*0': value} for key, value in list(beam_['beams'].items())} elif 'restoringbeam' in imageinfo: beam_ = imageinfo['restoringbeam'] else: beam_ = {} wcs = coordsys_to_astropy_wcs(casa_cs) del casa_cs if 'major' in beam_: beam = Beam(major=u.Quantity(beam_['major']['value'], unit=beam_['major']['unit']), minor=u.Quantity(beam_['minor']['value'], unit=beam_['minor']['unit']), pa=u.Quantity(beam_['positionangle']['value'], unit=beam_['positionangle']['unit']), ) elif 'beams' in beam_: bdict = beam_['beams'] if beam_['nStokes'] > 1: raise NotImplementedError() nbeams = len(bdict) assert nbeams == beam_['nChannels'] stokesidx = '*0' majors = [u.Quantity(bdict['*{0}'.format(ii)][stokesidx]['major']['value'], bdict['*{0}'.format(ii)][stokesidx]['major']['unit']) for ii in range(nbeams)] minors = [u.Quantity(bdict['*{0}'.format(ii)][stokesidx]['minor']['value'], bdict['*{0}'.format(ii)][stokesidx]['minor']['unit']) for ii in range(nbeams)] pas = [u.Quantity(bdict['*{0}'.format(ii)][stokesidx]['positionangle']['value'], bdict['*{0}'.format(ii)][stokesidx]['positionangle']['unit']) for ii in range(nbeams)] beams = Beams(major=u.Quantity(majors), minor=u.Quantity(minors), pa=u.Quantity(pas)) else: warnings.warn("No beam information found in CASA image.", BeamWarning) # don't need this yet # stokes = get_casa_axis(temp_cs, wanttype="Stokes", skipdeg=False,) # if stokes == None: # order = np.arange(self.data.ndim) # else: # order = [] # for ax in np.arange(self.data.ndim+1): # if ax == stokes: # continue # order.append(ax) # self.casa_cs = ia.coordsys(order) # This should work, but coordsys.reorder() has a bug # on the error checking. JIRA filed. Until then the # axes will be reversed from the original. 
# if transpose == True: # new_order = np.arange(self.data.ndim) # new_order = new_order[-1*np.arange(self.data.ndim)-1] # print new_order # self.casa_cs.reorder(new_order) meta = {'filename': filename, 'BUNIT': unit} if wcs.naxis == 3: data, wcs_slice = cube_utils._orient(data, wcs) if valid is not None: valid, _ = cube_utils._orient(valid, wcs) mask = BooleanArrayMask(valid, wcs_slice) else: mask = None if 'beam' in locals(): cube = DaskSpectralCube(data, wcs_slice, mask, meta=meta, beam=beam) elif 'beams' in locals(): cube = DaskVaryingResolutionSpectralCube(data, wcs_slice, mask, meta=meta, beams=beams) else: cube = DaskSpectralCube(data, wcs_slice, mask, meta=meta) # with #592, this is no longer true # we've already loaded the cube into memory because of CASA # limitations, so there's no reason to disallow operations # cube.allow_huge_operations = True if mask is not None: assert cube.mask.shape == cube.shape elif wcs.naxis == 4: if valid is not None: valid, _ = cube_utils._split_stokes(valid, wcs) data, wcs = cube_utils._split_stokes(data, wcs) mask = {} for component in data: data_, wcs_slice = cube_utils._orient(data[component], wcs) if valid is not None: valid_, _ = cube_utils._orient(valid[component], wcs) mask[component] = BooleanArrayMask(valid_, wcs_slice) else: mask[component] = None if 'beam' in locals(): data[component] = DaskSpectralCube(data_, wcs_slice, mask[component], meta=meta, beam=beam) elif 'beams' in locals(): data[component] = DaskVaryingResolutionSpectralCube(data_, wcs_slice, mask[component], meta=meta, beams=beams) else: data[component] = DaskSpectralCube(data_, wcs_slice, mask[component], meta=meta) data[component].allow_huge_operations = True cube = StokesSpectralCube(stokes_data=data) if mask['I'] is not None: assert cube.I.mask.shape == cube.shape assert wcs_utils.check_equality(cube.I.mask._wcs, cube.wcs) else: raise ValueError("CASA image has {0} dimensions, and therefore " "is not readable by spectral-cube.".format(wcs.naxis)) from .core import normalize_cube_stokes return normalize_cube_stokes(cube, target_cls=target_cls) io_registry.register_reader('casa', BaseSpectralCube, load_casa_image) io_registry.register_reader('casa_image', BaseSpectralCube, load_casa_image) io_registry.register_identifier('casa', BaseSpectralCube, is_casa_image) io_registry.register_reader('casa', StokesSpectralCube, load_casa_image) io_registry.register_reader('casa_image', StokesSpectralCube, load_casa_image) io_registry.register_identifier('casa', StokesSpectralCube, is_casa_image) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/io/casa_masks.py0000644000175100001710000000714500000000000021516 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import numpy as np from astropy.io import fits import tempfile import os from ..wcs_utils import add_stokes_axis_to_wcs __all__ = ['make_casa_mask'] def make_casa_mask(SpecCube, outname, append_to_image=True, img=None, add_stokes=True, stokes_posn=None, overwrite=False ): ''' Outputs the mask attached to the SpectralCube object as a CASA image, or optionally appends the mask to a preexisting CASA image. Parameters ---------- SpecCube : SpectralCube SpectralCube object containing mask. outname : str Name of the outputted mask file. append_to_image : bool, optional Appends the mask to a given image. img : str, optional Image to be appended to. Must be specified if append_to_image is enabled. 
add_stokes: bool, optional Adds a Stokes axis onto the wcs from SpecCube. stokes_posn : int, optional Sets the position of the new Stokes axis. Defaults to the last axis. overwrite : bool, optional Overwrite the image and mask files if they exist? ''' try: from casatools import image ia = image() except ImportError: try: from taskinit import ia except ImportError: raise ImportError("Cannot import casa. Must be run in a CASA environment.") # the 'mask name' is distinct from the mask _path_ maskname = os.path.split(outname)[1] maskpath = outname # Get the header info from the image # There's no wcs_astropy2casa (yet), so create a temporary file for # CASA to open. temp = tempfile.NamedTemporaryFile() # CASA is closing this file at some point so set it to manual delete. temp2 = tempfile.NamedTemporaryFile(delete=False) # Grab wcs # Optionally re-add on the Stokes axis if add_stokes: my_wcs = SpecCube.wcs if stokes_posn is None: stokes_posn = my_wcs.wcs.naxis new_wcs = add_stokes_axis_to_wcs(my_wcs, stokes_posn) header = new_wcs.to_header() # Transpose the shape so we're adding the axis at the place CASA will # recognize. Then transpose back. shape = SpecCube.shape[::-1] shape = shape[:stokes_posn] + (1,) + shape[stokes_posn:] shape = shape[::-1] else: # Just grab the header from SpecCube header = SpecCube.header shape = SpecCube.shape hdu = fits.PrimaryHDU(header=header, data=np.empty(shape, dtype='int16')) hdu.writeto(temp.name) ia.fromfits(infile=temp.name, outfile=temp2.name, overwrite=overwrite) temp.close() cs = ia.coordsys() ia.done() ia.close() temp2.close() mask_arr = SpecCube.mask.include() # Reshape mask with possible Stokes axis mask_arr = mask_arr.reshape(shape) # Transpose to match CASA axes mask_arr = mask_arr.T ia.fromarray(outfile=maskpath, pixels=mask_arr.astype('int16'), overwrite=overwrite) ia.done() ia.close() ia.open(maskpath, cache=False) ia.setcoordsys(cs.torecord()) ia.done() ia.close() if append_to_image: if img is None: raise TypeError("img argument must be specified to append the mask.") ia.open(maskpath, cache=False) ia.calcmask(maskname+">0.5") ia.done() ia.close() ia.open(img, cache=False) ia.maskhandler('copy', [maskpath+":mask0", maskname]) ia.maskhandler('set', maskname) ia.done() ia.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/io/class_lmv.py0000644000175100001710000007175200000000000021371 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import six import numpy as np import struct import warnings import string from astropy import log from astropy.io import registry as io_registry from ..spectral_cube import BaseSpectralCube from .fits import load_fits_cube """ .. TODO:: When any section length is zero, that means the following values are to be ignored. No warning is needed.
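Illustrative usage of this reader (a sketch; the filename is a placeholder)::

    >>> from spectral_cube import SpectralCube
    >>> cube = SpectralCube.read('observation.lmv', format='lmv')  # doctest: +SKIP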
""" # Constant: r2deg = 180/np.pi # see sicfits.f90 _ctype_dict={'LII':'GLON', 'BII':'GLAT', 'VELOCITY':'VELO', 'RA':'RA', 'DEC':'DEC', 'FREQUENCY': 'FREQ', } _cunit_dict = {'LII':'deg', 'BII':'deg', 'VELOCITY':'km s-1', 'RA':'deg', 'DEC':'deg', 'FREQUENCY': 'MHz', } cel_types = ('RA','DEC','GLON','GLAT') # CLASS apparently defaults to an ARC (zenithal equidistant) projection; this # is what is output in case the projection # is zero when exporting from CLASS _proj_dict = {0:'ARC', 1:'TAN', 2:'SIN', 3:'AZP', 4:'STG', 5:'ZEA', 6:'AIT', 7:'GLS', 8:'SFL', } _bunit_dict = {'k (tmb)': 'K'} def is_lmv(origin, filepath, fileobj, *args, **kwargs): """ Determine whether input is in GILDAS CLASS lmv format """ return filepath is not None and filepath.lower().endswith('.lmv') def read_lmv(lf): """ Read an LMV cube file Specification is primarily in GILDAS image_def.f90 """ log.warning("CLASS LMV cube reading is tentatively supported. " "Please post bug reports at the first sign of danger!") # lf for "LMV File" filetype = _read_string(lf, 12) #!--------------------------------------------------------------------- #! @ private #! SYCODE system code #! '-' IEEE #! '.' EEEI (IBM like) #! '_' VAX #! IMCODE file code #! '<' IEEE 64 bits (Little Endian, 99.9 % of recent computers) #! '>' EEEI 64 bits (Big Endian, HPUX, IBM-RISC, and SPARC ...) #!--------------------------------------------------------------------- imcode = filetype[6] if filetype[:6] != 'GILDAS' or filetype[7:] != 'IMAGE': raise TypeError("File is not a GILDAS Image file") if imcode in ('<','>'): if imcode =='>': log.warning("Swap the endianness first...") return read_lmv_type2(lf) else: return read_lmv_type1(lf) def read_lmv_type1(lf): header = {} # fmt probably matters! Default is "r4", i.e. float32 data, but could be float64 fmt = np.fromfile(lf, dtype='int32', count=1) # 4 # number of data blocks ndb = np.fromfile(lf, dtype='int32', count=1) # 5 gdf_type = np.fromfile(lf, dtype='int32', count=1) # 6 # Reserved Space reserved_fill = np.fromfile(lf, dtype='int32', count=4) # 7 general_section_length = np.fromfile(lf, dtype='int32', count=1) # 11 #print "Format: ",fmt," ndb: ",ndb, " fill: ",fill," other: ",unknown # pos 12 naxis,naxis1,naxis2,naxis3,naxis4 = np.fromfile(lf,count=5,dtype='int32') header['NAXIS'] = naxis header['NAXIS1'] = naxis1 header['NAXIS2'] = naxis2 header['NAXIS3'] = naxis3 header['NAXIS4'] = naxis4 # We are indexing bytes from here; CLASS indices are higher by 12 # pos 17 header['CRPIX1'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CRVAL1'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CDELT1'] = np.fromfile(lf,count=1,dtype='float64')[0] * r2deg header['CRPIX2'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CRVAL2'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CDELT2'] = np.fromfile(lf,count=1,dtype='float64')[0] * r2deg header['CRPIX3'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CRVAL3'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CDELT3'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CRPIX4'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CRVAL4'] = np.fromfile(lf,count=1,dtype='float64')[0] header['CDELT4'] = np.fromfile(lf,count=1,dtype='float64')[0] # pos 41 #print "Post-crval",lf.tell() blank_section_length = np.fromfile(lf,count=1,dtype='int32') if blank_section_length != 8: warnings.warn("Invalid section length found for blanking section") bval = np.fromfile(lf,count=1,dtype='float32')[0] # 42 header['TOLERANC'] = 
np.fromfile(lf,count=1,dtype='int32')[0] # 43 eval = tolerance extrema_section_length = np.fromfile(lf,count=1,dtype='int32')[0] # 44 if extrema_section_length != 40: warnings.warn("Invalid section length found for extrema section") vmin,vmax = np.fromfile(lf,count=2,dtype='float32') # 45 xmin,xmax,ymin,ymax,zmin,zmax = np.fromfile(lf,count=6,dtype='int32') # 47 wmin,wmax = np.fromfile(lf,count=2,dtype='int32') # 53 description_section_length = np.fromfile(lf,count=1,dtype='int32')[0] # 55 if description_section_length != 72: warnings.warn("Invalid section length found for description section") #strings = lf.read(description_section_length) # 56 header['BUNIT'] = _read_string(lf, 12) # 56 header['CTYPE1'] = _read_string(lf, 12) # 59 header['CTYPE2'] = _read_string(lf, 12) # 62 header['CTYPE3'] = _read_string(lf, 12) # 65 header['CTYPE4'] = _read_string(lf, 12) # 68 header['CUNIT1'] = _cunit_dict[header['CTYPE1'].strip()] header['CUNIT2'] = _cunit_dict[header['CTYPE2'].strip()] header['CUNIT3'] = _cunit_dict[header['CTYPE3'].strip()] header['COOSYS'] = _read_string(lf, 12) # 71 position_section_length = np.fromfile(lf,count=1,dtype='int32') # 74 if position_section_length != 48: warnings.warn("Invalid section length found for position section") header['OBJNAME'] = _read_string(lf, 4*3) # 75 header['RA'] = np.fromfile(lf, count=1, dtype='float64')[0] * r2deg # 78 header['DEC'] = np.fromfile(lf, count=1, dtype='float64')[0] * r2deg # 80 header['GLON'] = np.fromfile(lf, count=1, dtype='float64')[0] * r2deg # 82 header['GLAT'] = np.fromfile(lf, count=1, dtype='float64')[0] * r2deg # 84 header['EQUINOX'] = np.fromfile(lf,count=1,dtype='float32')[0] # 86 header['PROJWORD'] = _read_string(lf, 4) # 87 header['PTYP'] = np.fromfile(lf,count=1,dtype='int32')[0] # 88 header['A0'] = np.fromfile(lf,count=1,dtype='float64')[0] # 89 header['D0'] = np.fromfile(lf,count=1,dtype='float64')[0] # 91 header['PANG'] = np.fromfile(lf,count=1,dtype='float64')[0] # 93 header['XAXI'] = np.fromfile(lf,count=1,dtype='float32')[0] # 95 header['YAXI'] = np.fromfile(lf,count=1,dtype='float32')[0] # 96 spectroscopy_section_length = np.fromfile(lf,count=1,dtype='int32') # 97 if spectroscopy_section_length != 48: warnings.warn("Invalid section length found for spectroscopy section") header['RECVR'] = _read_string(lf, 12) # 98 header['FRES'] = np.fromfile(lf,count=1,dtype='float64')[0] # 101 header['IMAGFREQ'] = np.fromfile(lf,count=1,dtype='float64')[0] # 103 "FIMA" header['REFFREQ'] = np.fromfile(lf,count=1,dtype='float64')[0] # 105 header['VRES'] = np.fromfile(lf,count=1,dtype='float32')[0] # 107 header['VOFF'] = np.fromfile(lf,count=1,dtype='float32')[0] # 108 header['FAXI'] = np.fromfile(lf,count=1,dtype='int32')[0] # 109 resolution_section_length = np.fromfile(lf,count=1,dtype='int32')[0] # 110 if resolution_section_length != 12: warnings.warn("Invalid section length found for resolution section") #header['DOPP'] = np.fromfile(lf,count=1,dtype='float16')[0] # 110a ??? #header['VTYP'] = np.fromfile(lf,count=1,dtype='int16')[0] # 110b # integer, parameter :: vel_unk = 0 ! Unsupported referential :: planetary...) # integer, parameter :: vel_lsr = 1 ! LSR referential # integer, parameter :: vel_hel = 2 ! Heliocentric referential # integer, parameter :: vel_obs = 3 ! Observatory referential # integer, parameter :: vel_ear = 4 ! Earth-Moon barycenter referential # integer, parameter :: vel_aut = -1 ! 
Take referential from data header['BMAJ'] = np.fromfile(lf,count=1,dtype='float32')[0] # 111 header['BMIN'] = np.fromfile(lf,count=1,dtype='float32')[0] # 112 header['BPA'] = np.fromfile(lf,count=1,dtype='float32')[0] # 113 noise_section_length = np.fromfile(lf,count=1,dtype='int32') if noise_section_length != 0: warnings.warn("Invalid section length found for noise section") header['NOISE'] = np.fromfile(lf,count=1,dtype='float32')[0] # 115 header['RMS'] = np.fromfile(lf,count=1,dtype='float32')[0] # 116 astrometry_section_length = np.fromfile(lf,count=1,dtype='int32') if astrometry_section_length != 0: warnings.warn("Invalid section length found for astrometry section") header['MURA'] = np.fromfile(lf,count=1,dtype='float32')[0] # 118 header['MUDEC'] = np.fromfile(lf,count=1,dtype='float32')[0] # 119 header['PARALLAX'] = np.fromfile(lf,count=1,dtype='float32')[0] # 120 # Apparently CLASS headers aren't required to fill the 'value at # reference pixel' column if (header['CTYPE1'].strip() == 'RA' and header['CRVAL1'] == 0 and header['RA'] != 0): header['CRVAL1'] = header['RA'] header['CRVAL2'] = header['DEC'] # Copied from the type 2 reader: # Use the appropriate projection type ptyp = header['PTYP'] for kw in header: if 'CTYPE' in kw: if header[kw].strip() in cel_types: n_dashes = 5-len(header[kw].strip()) header[kw] = header[kw].strip()+ '-'*n_dashes + _proj_dict[ptyp] other_info = np.fromfile(lf, count=7, dtype='float32') # 121-end if not np.all(other_info == 0): warnings.warn("Found additional information in the last 7 bytes") endpoint = 508 if lf.tell() != endpoint: raise ValueError("Header was not parsed correctly") data = np.fromfile(lf, count=naxis1*naxis2*naxis3, dtype='float32') data[data == bval] = np.nan # for no apparent reason, y and z are 1-indexed and x is zero-indexed if (wmin-1,zmin-1,ymin-1,xmin) != np.unravel_index(np.nanargmin(data), [naxis4,naxis3,naxis2,naxis1]): warnings.warn("Data min location does not match that on file. " "Possible error reading data.") if (wmax-1,zmax-1,ymax-1,xmax) != np.unravel_index(np.nanargmax(data), [naxis4,naxis3,naxis2,naxis1]): warnings.warn("Data max location does not match that on file. " "Possible error reading data.") if np.nanmax(data) != vmax: warnings.warn("Data max does not match that on file. " "Possible error reading data.") if np.nanmin(data) != vmin: warnings.warn("Data min does not match that on file. 
" "Possible error reading data.") return data.reshape([naxis4,naxis3,naxis2,naxis1]),header # debug #return data.reshape([naxis3,naxis2,naxis1]), header, hdr_f, hdr_s, hdr_i, hdr_d, hdr_d_2 def read_lmv_tofits(fileobj): from astropy.io import fits data,header = read_lmv(fileobj) # LMV may contain extra dimensions that are improperly labeled data = data.squeeze() bad_kws = ['NAXIS4','CRVAL4','CRPIX4','CDELT4','CROTA4','CUNIT4','CTYPE4'] cards = [fits.header.Card(keyword=k, value=v[0], comment=v[1]) if isinstance(v, tuple) else fits.header.Card(''.join(s for s in k if s in string.printable), ''.join(s for s in v if s in string.printable) if isinstance(v, six.string_types) else v) for k,v in six.iteritems(header) if k not in bad_kws] Header = fits.Header(cards) hdu = fits.PrimaryHDU(data=data, header=Header) return hdu def load_lmv_cube(fileobj, target_cls=None, use_dask=None): hdu = read_lmv_tofits(fileobj) meta = {'filename':fileobj.name} return load_fits_cube(hdu, meta=meta, use_dask=use_dask) def _read_byte(f): '''Read a single byte (from idlsave)''' return np.uint8(struct.unpack('=B', f.read(4)[:1])[0]) def _read_int16(f): '''Read a signed 16-bit integer (from idlsave)''' return np.int16(struct.unpack('=h', f.read(4)[2:4])[0]) def _read_int32(f): '''Read a signed 32-bit integer (from idlsave)''' return np.int32(struct.unpack('=i', f.read(4))[0]) def _read_int64(f): '''Read a signed 64-bit integer ''' return np.int64(struct.unpack('=q', f.read(8))[0]) def _read_float32(f): '''Read a 32-bit float (from idlsave)''' return np.float32(struct.unpack('=f', f.read(4))[0]) def _read_string(f, size): '''Read a string of known maximum length''' return f.read(size).decode('utf-8').strip() def _read_float64(f): '''Read a 64-bit float (from idlsave)''' return np.float64(struct.unpack('=d', f.read(8))[0]) def _check_val(name, got,expected): if got != expected: log.warning("{2} = {0} instead of {1}".format(got, expected, name)) def read_lmv_type2(lf): """ See image_def.f90 """ header = {} lf.seek(12) # DONE before integer(kind=4) :: ijtyp(3) = 0 ! 1 Image Type # fmt probably matters! Default is "r4", i.e. 
float32 data, but could be float64 fmt = _read_int32(lf) # 4 # number of data blocks ndb = _read_int64(lf) # 5 nhb = _read_int32(lf) # 7 ntb = _read_int32(lf) # 8 version_gdf = _read_int32(lf) # 9 if version_gdf != 20: raise TypeError("Trying to read a version-2 file, but the version" " number is {0} (should be 20)".format(version_gdf)) type_gdf = _read_int32(lf) # 10 dim_start = _read_int32(lf) # 11 pad_trail = _read_int32(lf) # 12 if dim_start % 2 == 0: log.warning("Got even dim_start in lmv cube: this is not expected.") if dim_start > 17: log.warning("dim_start > 17 in lmv cube: this is not expected.") lf.seek(16*4) gdf_maxdims=7 dim_words = _read_int32(lf) # 17 if dim_words != 2*gdf_maxdims+2: log.warning("dim_words = {0} instead of {1}".format(dim_words, gdf_maxdims*2+2)) blan_start = _read_int32(lf) # 18 if blan_start != dim_start+dim_words+2: log.warning("blan_start = {0} instead of {1}".format(blan_start, dim_start+dim_words+2)) mdim = _read_int32(lf) # 19 ndim = _read_int32(lf) # 20 dims = np.fromfile(lf, count=gdf_maxdims, dtype='int64') if np.count_nonzero(dims) != ndim: raise ValueError("Disagreement between ndims and number of nonzero dims.") header['NAXIS'] = ndim valid_dims = [] for ii,dim in enumerate(dims): if dim != 0: header['NAXIS{0}'.format(ii+1)] = dim valid_dims.append(ii) blan_words = _read_int32(lf) if blan_words != 2: log.warning("blan_words = {0} instead of 2".format(blan_words)) extr_start = _read_int32(lf) bval = _read_float32(lf) # blanking value bval_tol = _read_float32(lf) # eval = tolerance # FITS requires integer BLANKs #header['BLANK'] = bval extr_words = _read_int32(lf) if extr_words != 6: log.warning("extr_words = {0} instead of 6".format(extr_words)) coor_start = _read_int32(lf) if coor_start != extr_start+extr_words+2: log.warning("coor_start = {0} instead of {1}".format(coor_start, extr_start+extr_words+2)) rmin = _read_float32(lf) rmax = _read_float32(lf) # position 168 minloc = _read_int64(lf) maxloc = _read_int64(lf) # lf.seek(184) coor_words = _read_int32(lf) if coor_words != gdf_maxdims*6: log.warning("coor_words = {0} instead of {1}".format(coor_words, gdf_maxdims*6)) desc_start = _read_int32(lf) if desc_start != coor_start+coor_words+2: log.warning("desc_start = {0} instead of {1}".format(desc_start, coor_start+coor_words+2)) convert = np.fromfile(lf, count=3*gdf_maxdims, dtype='float64').reshape([gdf_maxdims,3]) # conversion of "convert" to CRPIX/CRVAL/CDELT below desc_words = _read_int32(lf) if desc_words != 3*(gdf_maxdims+1): log.warning("desc_words = {0} instead of {1}".format(desc_words, 3*(gdf_maxdims+1))) null_start = _read_int32(lf) if null_start != desc_start+desc_words+2: log.warning("null_start = {0} instead of {1}".format(null_start, desc_start+desc_words+2)) ijuni = _read_string(lf, 12) # data unit ijcode = [_read_string(lf, 12) for ii in range(gdf_maxdims)] pad_desc = _read_int32(lf) if ijuni.lower() in _bunit_dict: header['BUNIT'] = (_bunit_dict[ijuni.lower()], ijuni) else: header['BUNIT'] = ijuni #! The first block length is thus #! s_dim-1 + (2*mdim+4) + (4) + (8) + (6*mdim+2) + (3*mdim+5) #! = s_dim-1 + mdim*(2+6+3) + (4+4+2+5+8) #! = s_dim-1 + 11*mdim + 23 #! With mdim = 7, s_dim=11, this is 110 spaces #! With mdim = 8, s_dim=11, this is 121 spaces #! MDIM > 8 would NOT fit in one block... #! #! Block 2: Ancillary information #! #! The same logic of Length + Pointer is used there too, although the #! length are fixed. Note rounding to even number for the pointer offsets #! in order to preserve alignement... #!
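# Illustrative sketch of the "length + pointer" section layout described in
# the quoted comments above (comments only, not executed; names are
# illustrative). Each GDF version-2 section begins with its payload length
# in words, then a pointer to the start of the next section:
#
#     section_words = _read_int32(lf)  # number of payload words in section
#     next_start = _read_int32(lf)     # word offset of the following section
#     # ...`section_words` words of payload follow...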
lf.seek(512) posi_words = _read_int32(lf) _check_val('posi_words', posi_words, 15) proj_start = _read_int32(lf) source_name = _read_string(lf, 12) header['OBJECT'] = source_name coordinate_system = _read_string(lf, 12) header['RA'] = _read_float64(lf) header['DEC'] = _read_float64(lf) header['LII'] = _read_float64(lf) header['BII'] = _read_float64(lf) header['EPOCH'] = _read_float32(lf) #pad_posi = _read_float32(lf) #print pad_posi #raise ValueError("pad_posi should probably be 0?") #! PROJECTION #integer(kind=4) :: proj_words = 9 ! Projection length: 9 used + 1 padding #integer(kind=4) :: spec_start !! = proj_start + 12 #real(kind=8) :: a0 = 0.d0 ! 89 X of projection center #real(kind=8) :: d0 = 0.d0 ! 91 Y of projection center #real(kind=8) :: pang = 0.d0 ! 93 Projection angle #integer(kind=4) :: ptyp = p_none ! 88 Projection type (see p_... codes) #integer(kind=4) :: xaxi = 0 ! 95 X axis #integer(kind=4) :: yaxi = 0 ! 96 Y axis #integer(kind=4) :: pad_proj #! proj_words = _read_int32(lf) spec_start = _read_int32(lf) _check_val('spec_start', spec_start, proj_start+proj_words+2) if proj_words == 9: header['PROJ_A0'] = _read_float64(lf) header['PROJ_D0'] = _read_float64(lf) header['PROJPANG'] = _read_float64(lf) ptyp = _read_int32(lf) header['PROJXAXI'] = _read_int32(lf) header['PROJYAXI'] = _read_int32(lf) elif proj_words != 0: raise ValueError("Invalid # of projection keywords") for kw in header: if 'CTYPE' in kw: if header[kw].strip() in cel_types: n_dashes = 5-len(header[kw].strip()) header[kw] = header[kw].strip()+ '-'*n_dashes + _proj_dict[ptyp] for ii,((ref,val,inc),code) in enumerate(zip(convert,ijcode)): if ii in valid_dims: # jul14a gio/to_imfits.f90 line 284-313 if ptyp != 0 and (ii+1) in (header['PROJXAXI'], header['PROJYAXI']): #! Compute reference pixel so that VAL(REF) = 0 ref = ref - val/inc if (ii+1) == header['PROJXAXI']: val = header['PROJ_A0'] elif (ii+1) == header['PROJYAXI']: val = header['PROJ_D0'] else: raise ValueError("Impossible state - code bug.") val = val*r2deg inc = inc*r2deg rota = r2deg*header['PROJPANG'] elif code in ('RA', 'L', 'B', 'DEC', 'LII', 'BII', 'GLAT', 'GLON', 'LAT', 'LON'): val = val*r2deg inc = inc*r2deg rota = 0.0 # These are not implemented: prefer to maintain original units (we're # reading in to spectral_cube after all, no need to change units until the # output step) #elseif (code.eq.'FREQUENCY') then #val = val*1.0d6 ! MHz to Hz #inc = inc*1.0d6 #elseif (code.eq.'VELOCITY') then #code = 'VRAD' ! force VRAD instead of VELOCITY for CASA #val = val*1.0d3 ! km/s to m/s #inc = inc*1.0d3 header['CRPIX{0}'.format(ii+1)] = ref header['CRVAL{0}'.format(ii+1)] = val header['CDELT{0}'.format(ii+1)] = inc for ii,ctype in enumerate(ijcode): if ii in valid_dims: header['CTYPE{0}'.format(ii+1)] = _ctype_dict[ctype] header['CUNIT{0}'.format(ii+1)] = _cunit_dict[ctype] spec_words = _read_int32(lf) reso_start = _read_int32(lf) _check_val('reso_start', reso_start, proj_start+proj_words+2+spec_words+2) if spec_words == 14: header['FRES'] = _read_float64(lf) header['FIMA'] = _read_float64(lf) header['FREQ'] = _read_float64(lf) header['VRES'] = _read_float32(lf) header['VOFF'] = _read_float32(lf) header['DOPP'] = _read_float32(lf) header['FAXI'] = _read_int32(lf) header['LINENAME'] = _read_string(lf, 12) header['VTYPE'] = _read_int32(lf) elif spec_words != 0: raise ValueError("Invalid # of spectroscopic keywords") #! SPECTROSCOPY #integer(kind=4) :: spec_words = 14 ! Spectroscopy length: 14 used #integer(kind=4) :: reso_start !! 
= spec_words + 16 #real(kind=8) :: fres = 0.d0 !101 Frequency resolution #real(kind=8) :: fima = 0.d0 !103 Image frequency #real(kind=8) :: freq = 0.d0 !105 Rest Frequency #real(kind=4) :: vres = 0.0 !107 Velocity resolution #real(kind=4) :: voff = 0.0 !108 Velocity offset #real(kind=4) :: dopp = 0.0 ! Doppler factor #integer(kind=4) :: faxi = 0 !109 Frequency axis #integer(kind=4) :: ijlin(3) = 0 ! 98 Line name #integer(kind=4) :: vtyp = vel_unk ! Velocity type (see vel_... codes) reso_words = _read_int32(lf) nois_start = _read_int32(lf) _check_val('nois_start', nois_start, proj_start+proj_words+2+spec_words+2+reso_words+2) if reso_words == 3: header['BMAJ'] = _read_float32(lf) header['BMIN'] = _read_float32(lf) header['BPA'] = _read_float32(lf) #pad_reso = _read_float32(lf) elif reso_words != 0: raise ValueError("Invalid # of resolution keywords") #! RESOLUTION #integer(kind=4) :: reso_words = 3 ! Resolution length: 3 used + 1 padding #integer(kind=4) :: nois_start !! = reso_words + 6 #real(kind=4) :: majo = 0.0 !111 Major axis #real(kind=4) :: mino = 0.0 !112 Minor axis #real(kind=4) :: posa = 0.0 !113 Position angle #real(kind=4) :: pad_reso nois_words = _read_int32(lf) astr_start = _read_int32(lf) _check_val('astr_start', astr_start, proj_start+proj_words+2+spec_words+2+reso_words+2+nois_words+2) if nois_words == 2: header['NOISE_T'] = (_read_float32(lf), "Theoretical Noise") header['NOISERMS'] = (_read_float32(lf), "Measured (RMS) noise") elif nois_words != 0: raise ValueError("Invalid # of noise keywords") #! NOISE #integer(kind=4) :: nois_words = 2 ! Noise section length: 2 used #integer(kind=4) :: astr_start !! = s_nois + 4 #real(kind=4) :: noise = 0.0 ! 115 Theoretical noise #real(kind=4) :: rms = 0.0 ! 116 Actual noise astr_words = _read_int32(lf) uvda_start = _read_int32(lf) _check_val('uvda_start', uvda_start, proj_start+proj_words+2+spec_words+2+reso_words+2+nois_words+2+astr_words+2) if astr_words == 3: header['MURA'] = _read_float32(lf) header['MUDEC'] = _read_float32(lf) header['PARALLAX'] = _read_float32(lf) elif astr_words != 0: raise ValueError("Invalid # of astrometry keywords") #! ASTROMETRY #integer(kind=4) :: astr_words = 3 ! Proper motion section length: 3 used + 1 padding #integer(kind=4) :: uvda_start !! = s_astr + 4 #real(kind=4) :: mura = 0.0 ! 118 along RA, in mas/yr #real(kind=4) :: mudec = 0.0 ! 119 along Dec, in mas/yr #real(kind=4) :: parallax = 0.0 ! 120 in mas #real(kind=4) :: pad_astr #! real(kind=4) :: pepoch = 2000.0 ! 121 in yrs ? code_uvt_last=25 uvda_words = _read_int32(lf) void_start = _read_int32(lf) _check_val('void_start', void_start, proj_start + proj_words + 2 + spec_words + 2 + reso_words + 2 + nois_words + 2 + astr_words + 2 + uvda_words + 2) if uvda_words == 18+2*code_uvt_last: version_uv = _read_int32(lf) nchan = _read_int32(lf) nvisi = _read_int64(lf) nstokes = _read_int32(lf) natom = _read_int32(lf) basemin = _read_float32(lf) basemax = _read_float32(lf) fcol = _read_int32(lf) lcol = _read_int32(lf) nlead = _read_int32(lf) ntrail = _read_int32(lf) column_pointer = np.fromfile(lf, count=code_uvt_last, dtype='int32') column_size = np.fromfile(lf, count=code_uvt_last, dtype='int32') column_codes = np.fromfile(lf, count=nlead+ntrail, dtype='int32') column_types = np.fromfile(lf, count=nlead+ntrail, dtype='int32') order = _read_int32(lf) nfreq = _read_int32(lf) atoms = np.fromfile(lf, count=4, dtype='int32') elif uvda_words != 0: raise ValueError("Invalid # of UV data keywords") #! 
UV_DATA information #integer(kind=4) :: uvda_words = 18+2*code_uvt_last ! Length of section: 14 used #integer(kind=4) :: void_start !! = s_uvda + l_uvda + 2 #integer(kind=4) :: version_uv = code_version_uvt_current ! 1 version number. Will allow us to change the data format #integer(kind=4) :: nchan = 0 ! 2 Number of channels #integer(kind=8) :: nvisi = 0 ! 3-4 Independent of the transposition status #integer(kind=4) :: nstokes = 0 ! 5 Number of polarizations #integer(kind=4) :: natom = 0 ! 6. 3 for real, imaginary, weight. 1 for real. #real(kind=4) :: basemin = 0. ! 7 Minimum Baseline #real(kind=4) :: basemax = 0. ! 8 Maximum Baseline #integer(kind=4) :: fcol ! 9 Column of first channel #integer(kind=4) :: lcol ! 10 Column of last channel #! The number of information per channel can be obtained by #! (lcol-fcol+1)/(nchan*natom) #! so this could allow to derive the number of Stokes parameters #! Leading data at start of each visibility contains specific information #integer(kind=4) :: nlead = 7 ! 11 Number of leading informations (at lest 7) #! Trailing data at end of each visibility may hold additional information #integer(kind=4) :: ntrail = 0 ! 12 Number of trailing informations #! #! Leading / Trailing information codes have been specified before #integer(kind=4) :: column_pointer(code_uvt_last) = code_null ! Back pointer to the columns... #integer(kind=4) :: column_size(code_uvt_last) = 0 ! Number of columns for each #! In the data, we instead have the codes for each column #! integer(kind=4) :: column_codes(nlead+ntrail) ! Start column for each ... #! integer(kind=4) :: column_types(nlead+ntrail) /0,1,2/ ! Number of columns for each: 1 real*4, 2 real*8 #! Leading / Trailing information codes #! #integer(kind=4) :: order = 0 ! 13 Stoke/Channel ordering #integer(kind=4) :: nfreq = 0 ! 14 ! 0 or = nchan*nstokes #integer(kind=4) :: atoms(4) ! 15-18 Atom description #! #real(kind=8), pointer :: freqs(:) => null() ! (nchan*nstokes) = 0d0 #integer(kind=4), pointer :: stokes(:) => null() ! (nchan*nstokes) or (nstokes) = code_stoke #! #real(kind=8), pointer :: ref(:) => null() #real(kind=8), pointer :: val(:) => null() #real(kind=8), pointer :: inc(:) => null() lf.seek(1024) real_dims = dims[:ndim] data = np.fromfile(lf, count=np.product(real_dims), dtype='float32').reshape(real_dims[::-1]) data[data==bval] = np.nan return data,header io_registry.register_reader('lmv', BaseSpectralCube, load_lmv_cube) io_registry.register_reader('class_lmv', BaseSpectralCube, load_lmv_cube) io_registry.register_identifier('lmv', BaseSpectralCube, is_lmv) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/io/core.py0000644000175100001710000001565600000000000020347 0ustar00runnerdocker# The read and write methods for SpectralCube, StokesSpectralCube, and # LowerDimensionalObject are defined in this file and then added to the classes # using UnifiedReadWriteMethod. This makes it possible to dynamically add the # available formats to the read/write docstrings. 
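# (The individual formats themselves are registered with the astropy I/O # registry in the per-format modules, e.g. ``fits.py`` later in this package, # not here.)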
For more information about # the unified I/O framework from Astropy which is used to implement this, see # http://docs.astropy.org/en/stable/io/unified.html from __future__ import print_function, absolute_import, division from pathlib import PosixPath import warnings from astropy.io import registry from ..utils import StokesWarning __doctest_skip__ = ['SpectralCubeRead', 'SpectralCubeWrite', 'StokesSpectralCubeRead', 'StokesSpectralCubeWrite', 'LowerDimensionalObjectWrite'] DOCSTRING_READ_TEMPLATE = """ Read and parse a dataset and return as a {clsname} This allows easily reading a dataset in several supported data formats using syntax such as:: >>> from spectral_cube import {clsname} >>> cube1 = {clsname}.read('cube.fits', format='fits') >>> cube2 = {clsname}.read('cube.image', format='casa') {notes} Get help on the available readers for {clsname} using the``help()`` method:: >>> {clsname}.read.help() # Get help reading {clsname} and list supported formats >>> {clsname}.read.help('fits') # Get detailed help on {clsname} FITS reader >>> {clsname}.read.list_formats() # Print list of available formats See also: http://docs.astropy.org/en/stable/io/unified.html Parameters ---------- *args : tuple, optional Positional arguments passed through to data reader. If supplied the first argument is typically the input filename. format : str File format specifier. **kwargs : dict, optional Keyword arguments passed through to data reader. Returns ------- cube : `{clsname}` {clsname} corresponding to dataset Notes ----- """ DOCSTRING_WRITE_TEMPLATE = """ Write this {clsname} object out in the specified format. This allows easily writing a dataset in many supported data formats using syntax such as:: >>> data.write('data.fits', format='fits') Get help on the available writers for {clsname} using the``help()`` method:: >>> {clsname}.write.help() # Get help writing {clsname} and list supported formats >>> {clsname}.write.help('fits') # Get detailed help on {clsname} FITS writer >>> {clsname}.write.list_formats() # Print list of available formats See also: http://docs.astropy.org/en/stable/io/unified.html Parameters ---------- *args : tuple, optional Positional arguments passed through to data writer. If supplied the first argument is the output filename. format : str File format specifier. **kwargs : dict, optional Keyword arguments passed through to data writer. Notes ----- """ # Due to a bug in the astropy I/O infrastructure which causes an exception # for directories (which we need for .image), we need to wrap the filenames in # a custom string so that astropy doesn't try and call get_readable_fileobj on # them. class StringWrapper: def __init__(self, value): self.value = value class SpectralCubeRead(registry.UnifiedReadWrite): __doc__ = DOCSTRING_READ_TEMPLATE.format(clsname='SpectralCube', notes="If the file contains Stokes axes, they will automatically be dropped. 
If " "you want to read in all Stokes informtion, use " ":meth:`~spectral_cube.StokesSpectralCube.read` instead.") def __init__(self, instance, cls): super().__init__(instance, cls, 'read') def __call__(self, filename, *args, **kwargs): from ..spectral_cube import BaseSpectralCube if isinstance(filename, PosixPath): filename = str(filename) kwargs['target_cls'] = BaseSpectralCube try: return registry.read(BaseSpectralCube, filename, *args, **kwargs) except IsADirectoryError: # See note above StringWrapper return registry.read(BaseSpectralCube, StringWrapper(filename), *args, **kwargs) class SpectralCubeWrite(registry.UnifiedReadWrite): __doc__ = DOCSTRING_WRITE_TEMPLATE.format(clsname='SpectralCube') def __init__(self, instance, cls): super().__init__(instance, cls, 'write') def __call__(self, *args, serialize_method=None, **kwargs): registry.write(self._instance, *args, **kwargs) class StokesSpectralCubeRead(registry.UnifiedReadWrite): __doc__ = DOCSTRING_READ_TEMPLATE.format(clsname='StokesSpectralCube', notes="If the file contains Stokes axes, they will be read in. If you are only " "interested in the unpolarized emission (I), you can use " ":meth:`~spectral_cube.SpectralCube.read` instead.") def __init__(self, instance, cls): super().__init__(instance, cls, 'read') def __call__(self, filename, *args, **kwargs): from ..stokes_spectral_cube import StokesSpectralCube if isinstance(filename, PosixPath): filename = str(filename) kwargs['target_cls'] = StokesSpectralCube try: return registry.read(StokesSpectralCube, filename, *args, **kwargs) except IsADirectoryError: # See note above StringWrapper return registry.read(StokesSpectralCube, StringWrapper(filename), *args, **kwargs) class StokesSpectralCubeWrite(registry.UnifiedReadWrite): __doc__ = DOCSTRING_WRITE_TEMPLATE.format(clsname='StokesSpectralCube') def __init__(self, instance, cls): super().__init__(instance, cls, 'write') def __call__(self, *args, serialize_method=None, **kwargs): registry.write(self._instance, *args, **kwargs) class LowerDimensionalObjectWrite(registry.UnifiedReadWrite): __doc__ = DOCSTRING_WRITE_TEMPLATE.format(clsname='LowerDimensionalObject') def __init__(self, instance, cls): super().__init__(instance, cls, 'write') def __call__(self, *args, serialize_method=None, **kwargs): registry.write(self._instance, *args, **kwargs) def normalize_cube_stokes(cube, target_cls=None): from ..spectral_cube import BaseSpectralCube from ..stokes_spectral_cube import StokesSpectralCube if target_cls is BaseSpectralCube and isinstance(cube, StokesSpectralCube): if hasattr(cube, 'I'): warnings.warn("Cube is a Stokes cube, " "returning spectral cube for I component", StokesWarning) return cube.I else: raise ValueError("Spectral cube is a Stokes cube that " "does not have an I component") elif target_cls is StokesSpectralCube and isinstance(cube, BaseSpectralCube): return StokesSpectralCube({'I': cube}) else: return cube ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/io/fits.py0000644000175100001710000002574000000000000020357 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import six import dask import warnings from astropy.io import fits from astropy.io import registry as io_registry import astropy.wcs from astropy.wcs import WCS from collections import OrderedDict from astropy.io.fits.hdu.hdulist import fitsopen as fits_open from astropy.io.fits.connect import FITS_SIGNATURE from astropy import units as u import 
numpy as np import datetime try: from .. import version SPECTRAL_CUBE_VERSION = version.version except ImportError: # We might be running py.test on a clean checkout SPECTRAL_CUBE_VERSION = 'dev' from .. import SpectralCube, StokesSpectralCube, LazyMask, VaryingResolutionSpectralCube from ..dask_spectral_cube import DaskSpectralCube, DaskVaryingResolutionSpectralCube from ..lower_dimensional_structures import LowerDimensionalObject from ..spectral_cube import BaseSpectralCube from .. import cube_utils from ..utils import BeamUnitsError, FITSWarning, FITSReadError, StokesWarning def first(iterable): return next(iter(iterable)) def is_fits(origin, filepath, fileobj, *args, **kwargs): """ Determine whether `origin` is a FITS file. Parameters ---------- origin : str or readable file-like object Path or file object containing a potential FITS file. Returns ------- is_fits : bool Returns `True` if the given file is a FITS file. """ if fileobj is not None: pos = fileobj.tell() sig = fileobj.read(30) fileobj.seek(pos) return sig == FITS_SIGNATURE elif filepath is not None: if filepath.lower().endswith(('.fits', '.fits.gz', '.fit', '.fit.gz', '.fts', '.fts.gz')): return True elif isinstance(args[0], (fits.HDUList, fits.ImageHDU, fits.PrimaryHDU)): return True else: return False def read_data_fits(input, hdu=None, mode='denywrite', **kwargs): """ Read an array and header from a FITS file. Parameters ---------- input : str or compatible `astropy.io.fits` HDU object If a string, the filename to read the data from. The following `astropy.io.fits` HDU objects can be used as input: - :class:`~astropy.io.fits.hdu.image.PrimaryHDU` - :class:`~astropy.io.fits.hdu.image.ImageHDU` - :class:`~astropy.io.fits.hdu.hdulist.HDUList` hdu : int or str, optional The HDU to read the data from. mode : str One of the FITS file reading modes; see `~astropy.io.fits.open`. ``denywrite`` is used by default since this prevents the system from checking that the entire cube will fit into swap, which can prevent the file from being opened at all.
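Returns ------- data : `~numpy.ndarray` The array from the selected HDU. header : `~astropy.io.fits.Header` The header of the selected HDU. beam_table : `~astropy.io.fits.FITS_rec` or None The beam table, if a beam ``BinTableHDU`` is present in the input. beam_units : tuple of `~astropy.units.Unit` The (major, minor) beam axis units; defaults to arcseconds.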
""" beam_table = None beam_units = (u.arcsec, u.arcsec) if isinstance(input, fits.HDUList): # Parse all array objects arrays = OrderedDict() for ihdu, hdu_item in enumerate(input): if isinstance(hdu_item, (fits.PrimaryHDU, fits.ImageHDU)): arrays[ihdu] = hdu_item elif isinstance(hdu_item, fits.BinTableHDU): if 'BPA' in hdu_item.data.names: beam_table = hdu_item.data # Check that the table has the expected form for beam units: # 1: BMAJ 2: BMIN 3: BPA for i in range(1, 4): if not f"TUNIT{i}" in hdu_item.header: raise BeamUnitsError(f"Missing beam units keyword {key}{i}" " in the header.") # Read the bmaj/bmin units from the header # (we still assume BPA is degrees because we've never seen an exceptional case) # this will crash if there is no appropriate header info maj_kw = [kw for kw, val in hdu_item.header.items() if val == 'BMAJ'][0] min_kw = [kw for kw, val in hdu_item.header.items() if val == 'BMIN'][0] maj_unit = hdu_item.header[maj_kw.replace('TTYPE', 'TUNIT')] min_unit = hdu_item.header[min_kw.replace('TTYPE', 'TUNIT')] # AIPS uses non-FITS-standard unit names; this catches the # only case we've seen so far if maj_unit == 'DEGREES': maj_unit = 'degree' if min_unit == 'DEGREES': min_unit = 'degree' maj_unit = u.Unit(maj_unit) min_unit = u.Unit(min_unit) beam_units = (maj_unit, min_unit) if len(arrays) > 1: if hdu is None: hdu = first(arrays) warnings.warn("hdu= was not specified but multiple arrays" " are present, reading in first available" " array (hdu={0})".format(hdu), FITSWarning ) # hdu might not be an integer, so we first need to convert it # to the correct HDU index hdu = input.index_of(hdu) if hdu in arrays: array_hdu = arrays[hdu] else: raise ValueError("No array found in hdu={0}".format(hdu)) elif len(arrays) == 1: array_hdu = arrays[first(arrays)] else: raise ValueError("No arrays found") elif isinstance(input, (fits.PrimaryHDU, fits.ImageHDU)): array_hdu = input else: if hasattr(input, 'read'): mode = None with fits_open(input, mode=mode, **kwargs) as hdulist: return read_data_fits(hdulist, hdu=hdu) return array_hdu.data, array_hdu.header, beam_table, beam_units def load_fits_cube(input, hdu=0, meta=None, target_cls=None, use_dask=False, **kwargs): """ Read in a cube from a FITS file using astropy. Parameters ---------- input: str or HDU The FITS cube file name or HDU hdu: int The extension number containing the data to be read meta: dict Metadata (can be inherited from other readers, for example) """ if use_dask: SC = DaskSpectralCube VRSC = DaskVaryingResolutionSpectralCube else: SC = SpectralCube VRSC = VaryingResolutionSpectralCube data, header, beam_table, beam_units = read_data_fits(input, hdu=hdu, **kwargs) if data is None: raise FITSReadError('No data found in HDU {0}. 
You can try using the hdu= ' 'keyword argument to read data from another HDU.'.format(hdu)) if meta is None: meta = {} if 'BUNIT' in header: meta['BUNIT'] = header['BUNIT'] with warnings.catch_warnings(): warnings.filterwarnings('ignore', category=astropy.wcs.FITSFixedWarning, append=True) wcs = WCS(header) if wcs.wcs.naxis == 3: data, wcs = cube_utils._orient(data, wcs) mask = LazyMask(np.isfinite, data=data, wcs=wcs) assert data.shape == mask._data.shape if beam_table is None: cube = SC(data, wcs, mask, meta=meta, header=header) else: cube = VRSC(data, wcs, mask, meta=meta, header=header, beam_table=beam_table, major_unit=beam_units[0], minor_unit=beam_units[1]) if hasattr(cube._mask, '_data'): # check that the shape matches if there is a shape # it is possible that VaryingResolution cubes will have a composite # mask instead assert cube._data.shape == cube._mask._data.shape elif wcs.wcs.naxis == 4: data, wcs = cube_utils._split_stokes(data, wcs) stokes_data = {} for component in data: comp_data, comp_wcs = cube_utils._orient(data[component], wcs) comp_mask = LazyMask(np.isfinite, data=comp_data, wcs=comp_wcs) if beam_table is None: stokes_data[component] = SC(comp_data, wcs=comp_wcs, mask=comp_mask, meta=meta, header=header) else: stokes_data[component] = VRSC(comp_data, wcs=comp_wcs, mask=comp_mask, meta=meta, header=header, beam_table=beam_table, major_unit=beam_units[0], minor_unit=beam_units[1] ) cube = StokesSpectralCube(stokes_data) else: raise FITSReadError("Data should be 3- or 4-dimensional") from .core import normalize_cube_stokes return normalize_cube_stokes(cube, target_cls=target_cls) def write_fits_cube(cube, filename, overwrite=False, include_origin_notes=True): """ Write a FITS cube with a WCS to a filename """ if isinstance(cube, BaseSpectralCube): hdulist = cube.hdulist if include_origin_notes: now = datetime.datetime.strftime(datetime.datetime.now(), "%Y/%m/%d-%H:%M:%S") hdulist[0].header.add_history("Written by spectral_cube v{version} on " "{date}".format(version=SPECTRAL_CUBE_VERSION, date=now)) try: hdulist.writeto(filename, overwrite=overwrite) except TypeError: hdulist.writeto(filename, clobber=overwrite) else: raise NotImplementedError() def write_fits_ldo(data, filename, overwrite=False): # Spectra may have HDUList objects instead of HDUs because they # have a beam table attached, so we want to try that first # (a more elegant way to write this might be to do "self._hdu_general.write" # and create a property `self._hdu_general` that selects the right one...) 
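# The TypeError fallbacks below support old versions of astropy in which # ``writeto`` took a ``clobber`` argument instead of ``overwrite``.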
if hasattr(data, 'hdulist'): try: data.hdulist.writeto(filename, overwrite=overwrite) except TypeError: data.hdulist.writeto(filename, clobber=overwrite) elif hasattr(data, 'hdu'): try: data.hdu.writeto(filename, overwrite=overwrite) except TypeError: data.hdu.writeto(filename, clobber=overwrite) io_registry.register_reader('fits', BaseSpectralCube, load_fits_cube) io_registry.register_writer('fits', BaseSpectralCube, write_fits_cube) io_registry.register_identifier('fits', BaseSpectralCube, is_fits) io_registry.register_reader('fits', StokesSpectralCube, load_fits_cube) io_registry.register_writer('fits', StokesSpectralCube, write_fits_cube) io_registry.register_identifier('fits', StokesSpectralCube, is_fits) io_registry.register_writer('fits', LowerDimensionalObject, write_fits_ldo) io_registry.register_identifier('fits', LowerDimensionalObject, is_fits) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/lower_dimensional_structures.py0000644000175100001710000011753600000000000025025 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import warnings import numpy as np from numpy.ma.core import nomask import dask.array as da from astropy import convolution from astropy import units as u from astropy import wcs #from astropy import log from astropy.io.fits import Header, HDUList, PrimaryHDU, BinTableHDU, FITS_rec from radio_beam import Beam, Beams from astropy.io.registry import UnifiedReadWriteMethod from . import spectral_axis from .io.core import LowerDimensionalObjectWrite from .utils import SliceWarning, BeamWarning, SmoothingWarning, FITSWarning, BeamUnitsError from . import cube_utils from . import wcs_utils from .masks import BooleanArrayMask, MaskBase from .base_class import (BaseNDClass, SpectralAxisMixinClass, SpatialCoordMixinClass, MaskableArrayMixinClass, MultiBeamMixinClass, BeamMixinClass, HeaderMixinClass ) __all__ = ['LowerDimensionalObject', 'Projection', 'Slice', 'OneDSpectrum'] class LowerDimensionalObject(u.Quantity, BaseNDClass, HeaderMixinClass): """ Generic class for 1D and 2D objects. """ @property def hdu(self): if self.wcs is None: hdu = PrimaryHDU(self.value) else: hdu = PrimaryHDU(self.value, header=self.header) hdu.header['BUNIT'] = self.unit.to_string(format='fits') if 'beam' in self.meta: hdu.header.update(self.meta['beam'].to_header_keywords()) return hdu def read(self, *args, **kwargs): raise NotImplementedError() write = UnifiedReadWriteMethod(LowerDimensionalObjectWrite) def __getslice__(self, start, end, increment=None): # I don't know why this is needed, but apparently one of the inherited # classes implements getslice, which forces us to overwrite it # I can't find any examples where __getslice__ is actually implemented, # though, so this seems like a deep and frightening bug. #log.debug("Getting a slice from {0} to {1}".format(start,end)) return self.__getitem__(slice(start, end, increment)) def __getitem__(self, key, **kwargs): """ Return a new `~spectral_cube.lower_dimensional_structures.LowerDimensionalObject` of the same class while keeping other properties fixed. 
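Indexing down to fewer than two dimensions instead returns a plain `~astropy.units.Quantity`, since the WCS can no longer be carried along. A hypothetical sketch, where ``proj`` is an existing two-dimensional `Projection`:: >>> sub = proj[10:20, 30:40] # still a Projection, with a sliced WCS >>> val = proj[5, 5] # a bare Quantity scalar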
""" new_qty = super(LowerDimensionalObject, self).__getitem__(key) if new_qty.ndim < 2: # do not return a projection return u.Quantity(new_qty) if self._wcs is not None: if ((isinstance(key, tuple) and any(isinstance(k, slice) for k in key) and len(key) > self.ndim)): # Example cases include: indexing tricks like [:,:,None] warnings.warn("Slice {0} cannot be used on this {1}-dimensional" " array's WCS. If this is intentional, you " " should use this {2}'s ``array`` or ``quantity``" " attribute." .format(key, self.ndim, type(self)), SliceWarning ) return self.quantity[key] else: newwcs = self._wcs[key] else: newwcs = None new = self.__class__(value=new_qty.value, unit=new_qty.unit, copy=False, wcs=newwcs, meta=self._meta, mask=(self._mask[key] if self._mask is not nomask else None), header=self._header, **kwargs) new._wcs = newwcs new._meta = self._meta new._mask=(self._mask[key] if self._mask is not nomask else nomask) new._header = self._header return new def __array_finalize__(self, obj): self._wcs = getattr(obj, '_wcs', None) self._meta = getattr(obj, '_meta', None) self._mask = getattr(obj, '_mask', None) self._header = getattr(obj, '_header', None) self._spectral_unit = getattr(obj, '_spectral_unit', None) self._fill_value = getattr(obj, '_fill_value', np.nan) self._wcs_tolerance = getattr(obj, '_wcs_tolerance', 0.0) if isinstance(obj, VaryingResolutionOneDSpectrum): self._beams = getattr(obj, '_beams', None) else: self._beam = getattr(obj, '_beam', None) super(LowerDimensionalObject, self).__array_finalize__(obj) @property def __array_priority__(self): return super(LowerDimensionalObject, self).__array_priority__*2 @property def array(self): """ Get a pure array representation of the LDO. Useful when multiplying and using numpy indexing tricks. """ return np.asarray(self) @property def _data(self): # the _data property is required by several other mixins # (which probably means defining it here is a bad design) return self.array @property def quantity(self): """ Get a pure `~astropy.units.Quantity` representation of the LDO. """ return u.Quantity(self) def to(self, unit, equivalencies=[], freq=None): """ Return a new `~spectral_cube.lower_dimensional_structures.Projection` of the same class with the specified unit. See `astropy.units.Quantity.to` for further details. """ if not isinstance(unit, u.Unit): unit = u.Unit(unit) if unit == self.unit: # No copying return self if hasattr(self, 'with_spectral_unit'): freq = self.with_spectral_unit(u.Hz).spectral_axis if freq is None and 'RESTFRQ' in self.header: freq = self.header['RESTFRQ'] * u.Hz # Create the tuple of unit conversions needed. factor = cube_utils.bunit_converters(self, unit, equivalencies=equivalencies, freq=freq) converted_array = (self.quantity * factor).value # use private versions of variables, not the generated property # versions # Not entirely sure the use of __class__ here is kosher, but we do want # self.__class__, not super() new = self.__class__(value=converted_array, unit=unit, copy=True, wcs=self._wcs, meta=self._meta, mask=self._mask, header=self._header) return new @property def _mask(self): """ Annoying hack to deal with np.ma.core.is_mask failures (I don't like using __ but I think it's necessary here)""" if self.__mask is None: # need this to be *exactly* the numpy boolean False return nomask return self.__mask @_mask.setter def _mask(self, value): self.__mask = value def shrink_mask(self): """ Copy of the numpy masked_array shrink_mask method. 
This is essentially a hack needed for matplotlib to show images. """ m = self._mask if m.ndim and not m.any(): self._mask = nomask return self def _initial_set_mask(self, mask): """ Helper tool to validate mask when originally setting it in __new__ Note that because this is intended to be used in __new__, order matters: ``self`` must have ``_wcs``, for example. """ if mask is None: mask = BooleanArrayMask(np.ones_like(self.value, dtype=bool), self._wcs, shape=self.value.shape) elif isinstance(mask, np.ndarray): if mask.shape != self.value.shape: raise ValueError("Mask shape must match the {0} shape." .format(self.__class__.__name__) ) mask = BooleanArrayMask(mask, self._wcs, shape=self.value.shape) elif isinstance(mask, MaskBase): pass else: raise TypeError("mask of type {} is not a supported mask " "type.".format(type(mask))) # Validate the mask before setting mask._validate_wcs(new_data=self.value, new_wcs=self._wcs, wcs_tolerance=self._wcs_tolerance) self._mask = mask class Projection(LowerDimensionalObject, SpatialCoordMixinClass, MaskableArrayMixinClass, BeamMixinClass): def __new__(cls, value, unit=None, dtype=None, copy=True, wcs=None, meta=None, mask=None, header=None, beam=None, fill_value=np.nan, read_beam=False, wcs_tolerance=0.0): if np.asarray(value).ndim != 2: raise ValueError("value should be a 2-d array") if wcs is not None and wcs.wcs.naxis != 2: raise ValueError("wcs should have two dimension") self = u.Quantity.__new__(cls, value, unit=unit, dtype=dtype, copy=copy).view(cls) self._wcs = wcs self._meta = {} if meta is None else meta self._wcs_tolerance = wcs_tolerance self._initial_set_mask(mask) self._fill_value = fill_value if header is not None: self._header = header else: self._header = Header() if beam is None: if "beam" in self.meta: beam = self.meta['beam'] elif read_beam: beam = cube_utils.try_load_beam(header) if beam is None: warnings.warn("Cannot load beam from header.", BeamWarning ) if beam is not None: self.beam = beam self.meta['beam'] = beam # TODO: Enable header updating when non-celestial slices are # properly handled in the WCS object. # self._header.update(beam.to_header_keywords()) self._cache = {} return self def with_beam(self, beam, raise_error_jybm=True): ''' Attach a new beam object to the Projection. Parameters ---------- beam : `~radio_beam.Beam` A new beam object. ''' if not isinstance(beam, Beam): raise TypeError("beam must be a radio_beam.Beam object.") self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) meta = self.meta.copy() meta['beam'] = beam return self._new_projection_with(beam=beam, meta=meta) def with_fill_value(self, fill_value): """ Create a new :class:`Projection` or :class:`Slice` with a different ``fill_value``. """ return self._new_projection_with(fill_value=fill_value) @property def _new_thing_with(self): return self._new_projection_with def _new_projection_with(self, data=None, wcs=None, mask=None, meta=None, fill_value=None, spectral_unit=None, unit=None, header=None, wcs_tolerance=None, beam=None, **kwargs): data = self._data if data is None else data if unit is None and hasattr(data, 'unit'): if data.unit != self.unit: raise u.UnitsError("New data unit '{0}' does not" " match unit '{1}'. You can" " override this by specifying the" " `unit` keyword." 
.format(data.unit, self.unit)) unit = data.unit elif unit is None: unit = self.unit elif unit is not None: # convert string units to Units if not isinstance(unit, u.Unit): unit = u.Unit(unit) if hasattr(data, 'unit'): if u.Unit(unit) != data.unit: raise u.UnitsError("The specified new cube unit '{0}' " "does not match the input unit '{1}'." .format(unit, data.unit)) else: data = u.Quantity(data, unit=unit, copy=False) wcs = self._wcs if wcs is None else wcs mask = self._mask if mask is None else mask if meta is None: meta = {} meta.update(self._meta) if unit is not None: meta['BUNIT'] = unit.to_string(format='FITS') fill_value = self._fill_value if fill_value is None else fill_value if beam is None: if hasattr(self, 'beam'): beam = self.beam newproj = self.__class__(value=data, wcs=wcs, mask=mask, meta=meta, unit=unit, fill_value=fill_value, header=header or self._header, wcs_tolerance=wcs_tolerance or self._wcs_tolerance, beam=beam, **kwargs) return newproj @staticmethod def from_hdu(hdu): ''' Return a projection from a FITS HDU. ''' if isinstance(hdu, HDUList): hdul = hdu hdu = hdul[0] if not len(hdu.data.shape) == 2: raise ValueError("HDU must contain two-dimensional data.") meta = {} mywcs = wcs.WCS(hdu.header) if "BUNIT" in hdu.header: unit = cube_utils.convert_bunit(hdu.header["BUNIT"]) meta["BUNIT"] = hdu.header["BUNIT"] else: unit = None beam = cube_utils.try_load_beam(hdu.header) self = Projection(hdu.data, unit=unit, wcs=mywcs, meta=meta, header=hdu.header, beam=beam) return self def quicklook(self, filename=None, use_aplpy=True, aplpy_kwargs={}): """ Use `APLpy <https://aplpy.github.io>`_ to make a quick-look image of the projection. This will make the ``FITSFigure`` attribute available. If there are unmatched celestial axes, this will instead show an image without axis labels. Parameters ---------- filename : str or None Optional - the filename to save the quicklook to. """ if use_aplpy: try: if not hasattr(self, 'FITSFigure'): import aplpy self.FITSFigure = aplpy.FITSFigure(self.hdu, **aplpy_kwargs) self.FITSFigure.show_grayscale() self.FITSFigure.add_colorbar() if filename is not None: self.FITSFigure.save(filename) except (wcs.InconsistentAxisTypesError, ImportError): self._quicklook_mpl(filename=filename) else: self._quicklook_mpl(filename=filename) def _quicklook_mpl(self, filename=None): from matplotlib import pyplot self.figure = pyplot.gcf() self.image = pyplot.imshow(self.value) if filename is not None: self.figure.savefig(filename) def convolve_to(self, beam, convolve=convolution.convolve_fft, **kwargs): """ Convolve the image to a specified beam. Parameters ---------- beam : `radio_beam.Beam` The beam to convolve to convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` Returns ------- proj : `Projection` A Projection convolved to the given ``beam`` object. """ self._raise_wcs_no_celestial() if not hasattr(self, 'beam'): raise ValueError("No beam is contained in Projection.meta.") # Check if the beams are the same. if beam == self.beam: warnings.warn("The given beam is identical to the current beam.
" "Skipping convolution.") return self pixscale = wcs.utils.proj_plane_pixel_area(self.wcs.celestial)**0.5 * u.deg convolution_kernel = \ beam.deconvolve(self.beam).as_kernel(pixscale) newdata = convolve(self.value, convolution_kernel, normalize_kernel=True, **kwargs) self = Projection(newdata, unit=self.unit, wcs=self.wcs, meta=self.meta, header=self.header, beam=beam) return self def reproject(self, header, order='bilinear'): """ Reproject the image into a new header. Parameters ---------- header : `astropy.io.fits.Header` A header specifying a cube in valid WCS order : int or str, optional The order of the interpolation (if ``mode`` is set to ``'interpolation'``). This can be either one of the following strings: * 'nearest-neighbor' * 'bilinear' * 'biquadratic' * 'bicubic' or an integer. A value of ``0`` indicates nearest neighbor interpolation. """ self._raise_wcs_no_celestial() try: from reproject.version import version except ImportError: raise ImportError("Requires the reproject package to be" " installed.") # Need version > 0.2 to work with cubes from distutils.version import LooseVersion if LooseVersion(version) < "0.3": raise Warning("Requires version >=0.3 of reproject. The current " "version is: {}".format(version)) from reproject import reproject_interp # TODO: Find the minimal footprint that contains the header and only reproject that # (see FITS_tools.regrid_cube for a guide on how to do this) newwcs = wcs.WCS(header) shape_out = [header['NAXIS{0}'.format(i + 1)] for i in range(header['NAXIS'])][::-1] newproj, newproj_valid = reproject_interp((self.value, self.header), newwcs, shape_out=shape_out, order=order) self = Projection(newproj, unit=self.unit, wcs=newwcs, meta=self.meta, header=header, read_beam=True) return self def subimage(self, xlo='min', xhi='max', ylo='min', yhi='max'): """ Extract a region spatially. When spatial WCS dimensions are given as an `~astropy.units.Quantity`, the spatial coordinates of the 'lo' and 'hi' corners are solved together. This minimizes WCS variations due to the sky curvature when slicing from a large (>1 deg) image. Parameters ---------- [xy]lo/[xy]hi : int or `astropy.units.Quantity` or `min`/`max` The endpoints to extract. If given as a quantity, will be interpreted as World coordinates. If given as a string or int, will be interpreted as pixel coordinates. """ self._raise_wcs_no_celestial() # Solve for the spatial pixel indices together limit_dict = wcs_utils.find_spatial_pixel_index(self, xlo, xhi, ylo, yhi) slices = [slice(limit_dict[xx + 'lo'], limit_dict[xx + 'hi']) for xx in 'yx'] return self[tuple(slices)] def to(self, unit, equivalencies=[], freq=None): """ Return a new `~spectral_cube.lower_dimensional_structures.Projection` of the same class with the specified unit. See `astropy.units.Quantity.to` for further details. """ return super(Projection, self).to(unit, equivalencies, freq) # A slice is just like a projection in every way class Slice(Projection): pass class BaseOneDSpectrum(LowerDimensionalObject, MaskableArrayMixinClass, SpectralAxisMixinClass): """ Properties shared between OneDSpectrum and VaryingResolutionOneDSpectrum. 
""" def __new__(cls, value, unit=None, dtype=None, copy=True, wcs=None, meta=None, mask=None, header=None, spectral_unit=None, fill_value=np.nan, wcs_tolerance=0.0): #log.debug("Creating a OneDSpectrum with __new__") if np.asarray(value).ndim != 1: raise ValueError("value should be a 1-d array") if wcs is not None and wcs.wcs.naxis != 1: raise ValueError("wcs should have two dimension") self = u.Quantity.__new__(cls, value, unit=unit, dtype=dtype, copy=copy).view(cls) self._wcs = wcs self._meta = {} if meta is None else meta self._wcs_tolerance = wcs_tolerance self._initial_set_mask(mask) self._fill_value = fill_value if header is not None: self._header = header else: self._header = Header() self._spectral_unit = spectral_unit if spectral_unit is None: if 'CUNIT1' in self._header: self._spectral_unit = u.Unit(self._header['CUNIT1']) elif self._wcs is not None: self._spectral_unit = u.Unit(self._wcs.wcs.cunit[0]) return self def __repr__(self): prefixstr = '<' + self.__class__.__name__ + ' ' arrstr = np.array2string(self.filled_data[:].value, separator=',', prefix=prefixstr) return '{0}{1}{2:s}>'.format(prefixstr, arrstr, self._unitstr) @staticmethod def from_hdu(hdu): ''' Return a OneDSpectrum from a FITS HDU or HDU list. ''' if isinstance(hdu, HDUList): hdul = hdu hdu = hdul[0] else: hdul = HDUList([hdu]) if not len(hdu.data.shape) == 1: raise ValueError("HDU must contain one-dimensional data.") meta = {} mywcs = wcs.WCS(hdu.header) if "BUNIT" in hdu.header: unit = cube_utils.convert_bunit(hdu.header["BUNIT"]) meta["BUNIT"] = hdu.header["BUNIT"] else: unit = None with warnings.catch_warnings(): warnings.filterwarnings('ignore', category=FITSWarning) beam = cube_utils.try_load_beams(hdul) if hasattr(beam, '__len__'): beams = beam else: beams = None if beams is not None: self = VaryingResolutionOneDSpectrum(hdu.data, unit=unit, wcs=mywcs, meta=meta, header=hdu.header, beams=beams) else: beam = cube_utils.try_load_beam(hdu.header) self = OneDSpectrum(hdu.data, unit=unit, wcs=mywcs, meta=meta, header=hdu.header, beam=beam) return self @property def header(self): header = super(BaseOneDSpectrum, self).header # Preserve the spectrum's spectral units if 'CUNIT1' in header and self._spectral_unit != u.Unit(header['CUNIT1']): spectral_scale = spectral_axis.wcs_unit_scale(self._spectral_unit) header['CDELT1'] *= spectral_scale header['CRVAL1'] *= spectral_scale header['CUNIT1'] = self.spectral_axis.unit.to_string(format='FITS') return header @property def spectral_axis(self): """ A `~astropy.units.Quantity` array containing the central values of each channel along the spectral axis. """ if self._wcs is None: spec_axis = np.arange(self.size) * u.one else: spec_axis = self.wcs.wcs_pix2world(np.arange(self.size), 0)[0] * \ u.Unit(self.wcs.wcs.cunit[0]) if self._spectral_unit is not None: spec_axis = spec_axis.to(self._spectral_unit) return spec_axis def quicklook(self, filename=None, drawstyle='steps-mid', **kwargs): """ Plot the spectrum with current spectral units in the currently open figure kwargs are passed to `matplotlib.pyplot.plot` Parameters ---------- filename : str or Non Optional - the filename to save the quicklook to. 
""" from matplotlib import pyplot ax = pyplot.gca() ax.plot(self.spectral_axis, self.filled_data[:].value, drawstyle=drawstyle, **kwargs) ax.set_xlabel(self.spectral_axis.unit.to_string(format='latex')) ax.set_ylabel(self.unit) if filename is not None: pyplot.gcf().savefig(filename) def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): newwcs, newmeta = self._new_spectral_wcs(unit, velocity_convention=velocity_convention, rest_value=rest_value) newheader = self._nowcs_header.copy() newheader.update(newwcs.to_header()) wcs_cunit = u.Unit(newheader['CUNIT1']) newheader['CUNIT1'] = unit.to_string(format='FITS') newheader['CDELT1'] *= wcs_cunit.to(unit) if self._mask is not None: newmask = self._mask.with_spectral_unit(unit, velocity_convention=velocity_convention, rest_value=rest_value) newmask._wcs = newwcs else: newmask = None return self._new_spectrum_with(wcs=newwcs, spectral_unit=unit, mask=newmask, meta=newmeta, header=newheader) def __getitem__(self, key, **kwargs): # Ideally, this could just be in VaryingResolutionOneDSpectrum, # but it's about the code is about the same length by just # keeping it here. try: kwargs['beams'] = self.beams[key] except (AttributeError, TypeError): pass new_qty = super(BaseOneDSpectrum, self).__getitem__(key) if isinstance(key, slice): new = self.__class__(value=new_qty.value, unit=new_qty.unit, copy=False, wcs=wcs_utils.slice_wcs(self._wcs, key, shape=self.shape), meta=self._meta, mask=(self._mask[key] if self._mask is not nomask else nomask), header=self._header, wcs_tolerance=self._wcs_tolerance, fill_value=self.fill_value, **kwargs) return new else: if self._mask is not nomask: # Kind of a hack; this is probably inefficient bad = self._mask.exclude()[key] if isinstance(bad, da.Array): bad = bad.compute() new_qty[bad] = np.nan return new_qty def __getattribute__(self, attrname): # This is a hack to handle dimensionality-reducing functions # We want spectrum.max() to return a Quantity, not a spectrum # Long-term, we really want `OneDSpectrum` to not inherit from # `Quantity`, but for now this approach works.... we just have # to add more functions to this list. if attrname in ('min', 'max', 'std', 'mean', 'sum', 'cumsum', 'nansum', 'ptp', 'var'): return getattr(self.quantity, attrname) else: return super(BaseOneDSpectrum, self).__getattribute__(attrname) def spectral_interpolate(self, spectral_grid, suppress_smooth_warning=False, fill_value=None): """ Resample the spectrum onto a specific grid Parameters ---------- spectral_grid : array An array of the spectral positions to regrid onto suppress_smooth_warning : bool If disabled, a warning will be raised when interpolating onto a grid that does not nyquist sample the existing grid. Disable this if you have already appropriately smoothed the data. fill_value : float Value for extrapolated spectral values that lie outside of the spectral range defined in the original data. The default is to use the nearest spectral channel in the cube. 
Returns ------- spectrum : OneDSpectrum """ assert spectral_grid.ndim == 1 inaxis = self.spectral_axis.to(spectral_grid.unit) indiff = np.mean(np.diff(inaxis)) outdiff = np.mean(np.diff(spectral_grid)) # account for reversed axes if outdiff < 0: spectral_grid = spectral_grid[::-1] outdiff = np.mean(np.diff(spectral_grid)) outslice = slice(None, None, -1) else: outslice = slice(None, None, 1) specslice = slice(None) if indiff >= 0 else slice(None, None, -1) inaxis = inaxis[specslice] indiff = np.mean(np.diff(inaxis)) # insanity checks if indiff < 0 or outdiff < 0: raise ValueError("impossible.") assert np.all(np.diff(spectral_grid) > 0) assert np.all(np.diff(inaxis) > 0) np.testing.assert_allclose(np.diff(spectral_grid), outdiff, err_msg="Output grid must be linear") if outdiff > 2 * indiff and not suppress_smooth_warning: warnings.warn("Input grid has too small a spacing. The data should " "be smoothed prior to resampling.", SmoothingWarning ) newspec = np.empty([spectral_grid.size], dtype=self.dtype) newmask = np.empty([spectral_grid.size], dtype='bool') newspec[outslice] = np.interp(spectral_grid.value, inaxis.value, self.filled_data[specslice].value, left=fill_value, right=fill_value) mask = self.mask.include() if all(mask): newmask = np.ones([spectral_grid.size], dtype='bool') else: interped = np.interp(spectral_grid.value, inaxis.value, mask[specslice]) > 0 newmask[outslice] = interped newwcs = self.wcs.deepcopy() newwcs.wcs.crpix[0] = 1 newwcs.wcs.crval[0] = spectral_grid[0].value if outslice.step > 0 \ else spectral_grid[-1].value newwcs.wcs.cunit[0] = spectral_grid.unit.to_string(format='FITS') newwcs.wcs.cdelt[0] = outdiff.value if outslice.step > 0 \ else -outdiff.value newwcs.wcs.set() newheader = self._nowcs_header.copy() newheader.update(newwcs.to_header()) wcs_cunit = u.Unit(newheader['CUNIT1']) newheader['CUNIT1'] = spectral_grid.unit.to_string(format='FITS') newheader['CDELT1'] *= wcs_cunit.to(spectral_grid.unit) newbmask = BooleanArrayMask(newmask, wcs=newwcs) return self._new_spectrum_with(data=newspec, wcs=newwcs, mask=newbmask, header=newheader, spectral_unit=spectral_grid.unit) def spectral_smooth(self, kernel, convolve=convolution.convolve, **kwargs): """ Smooth the spectrum Parameters ---------- kernel : `~astropy.convolution.Kernel1D` A 1D kernel from astropy convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` kwargs : dict Passed to the convolve function """ newspec = convolve(self.value, kernel, normalize_kernel=True, **kwargs) return self._new_spectrum_with(data=newspec) def to(self, unit, equivalencies=[]): """ Return a new `~spectral_cube.lower_dimensional_structures.OneDSpectrum` of the same class with the specified unit. See `astropy.units.Quantity.to` for further details. """ return super(BaseOneDSpectrum, self).to(unit, equivalencies, freq=None) def with_fill_value(self, fill_value): """ Create a new :class:`OneDSpectrum` with a different ``fill_value``. """ return self._new_spectrum_with(fill_value=fill_value) @property def _new_thing_with(self): return self._new_spectrum_with def _new_spectrum_with(self, data=None, wcs=None, mask=None, meta=None, fill_value=None, spectral_unit=None, unit=None, header=None, wcs_tolerance=None, **kwargs): data = self._data if data is None else data if unit is None and hasattr(data, 'unit'): if data.unit != self.unit: raise u.UnitsError("New data unit '{0}' does not" " match unit '{1}'. 
You can" " override this by specifying the" " `unit` keyword." .format(data.unit, self.unit)) unit = data.unit elif unit is None: unit = self.unit elif unit is not None: # convert string units to Units if not isinstance(unit, u.Unit): unit = u.Unit(unit) if hasattr(data, 'unit'): if u.Unit(unit) != data.unit: raise u.UnitsError("The specified new cube unit '{0}' " "does not match the input unit '{1}'." .format(unit, data.unit)) else: data = u.Quantity(data, unit=unit, copy=False) wcs = self._wcs if wcs is None else wcs mask = self._mask if mask is None else mask if meta is None: meta = {} meta.update(self._meta) if unit is not None: meta['BUNIT'] = unit.to_string(format='FITS') fill_value = self._fill_value if fill_value is None else fill_value spectral_unit = self._spectral_unit if spectral_unit is None else u.Unit(spectral_unit) spectrum = self.__class__(value=data, wcs=wcs, mask=mask, meta=meta, unit=unit, fill_value=fill_value, header=header or self._header, wcs_tolerance=wcs_tolerance or self._wcs_tolerance, **kwargs) spectrum._spectral_unit = spectral_unit return spectrum class OneDSpectrum(BaseOneDSpectrum, BeamMixinClass): def __new__(cls, value, beam=None, read_beam=False, **kwargs): self = super(OneDSpectrum, cls).__new__(cls, value, **kwargs) if beam is None: if "beam" in self.meta: beam = self.meta['beam'] elif read_beam: beam = cube_utils.try_load_beam(self.header) if beam is None: warnings.warn("Cannot load beam from header.", BeamWarning ) if beam is not None: self.beam = beam self.meta['beam'] = beam self._cache = {} return self def _new_spectrum_with(self, **kwargs): beam = kwargs.pop('beam', None) if 'beam' in self._meta and beam is None: beam = self.beam out = super(OneDSpectrum, self)._new_spectrum_with(beam=beam, **kwargs) return out def with_beam(self, beam, raise_error_jybm=True): ''' Attach a new beam object to the OneDSpectrum. Parameters ---------- beam : `~radio_beam.Beam` A new beam object. 
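Returns ------- spectrum : `OneDSpectrum` A new spectrum with ``beam`` attached and stored in ``meta['beam']``.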
''' if not isinstance(beam, Beam): raise TypeError("beam must be a radio_beam.Beam object.") self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) meta = self.meta.copy() meta['beam'] = beam return self._new_spectrum_with(beam=beam, meta=meta) class VaryingResolutionOneDSpectrum(BaseOneDSpectrum, MultiBeamMixinClass): def __new__(cls, value, beams=None, read_beam=False, goodbeams_mask=None, **kwargs): self = super(VaryingResolutionOneDSpectrum, cls).__new__(cls, value, **kwargs) assert hasattr(self, '_fill_value') if beams is None: if "beams" in self.meta: beams = self.meta['beams'] elif read_beam: beams = cube_utils.try_load_beams(self.header) if beams is None: warnings.warn("Cannot load beams table from header.", BeamWarning ) if beams is not None: if isinstance(beams, BinTableHDU): beam_data_table = beams.data elif isinstance(beams, FITS_rec): beam_data_table = beams else: beam_data_table = None if beam_data_table is not None: beams = Beams(major=u.Quantity(beam_data_table['BMAJ'], u.arcsec), minor=u.Quantity(beam_data_table['BMIN'], u.arcsec), pa=u.Quantity(beam_data_table['BPA'], u.deg), meta=[{key: row[key] for key in beam_data_table.names if key not in ('BMAJ','BPA', 'BMIN')} for row in beam_data_table],) self.beams = beams self.meta['beams'] = beams if goodbeams_mask is not None: self.goodbeams_mask = goodbeams_mask self._cache = {} return self @property def hdu(self): warnings.warn("There are multiple beams for this spectrum that " "are being ignored when creating the HDU.", BeamWarning ) return super(VaryingResolutionOneDSpectrum, self).hdu @property def hdulist(self): with warnings.catch_warnings(): warnings.simplefilter("ignore") hdu = self.hdu beamhdu = cube_utils.beams_to_bintable(self.beams) return HDUList([hdu, beamhdu]) def _new_spectrum_with(self, **kwargs): beams = kwargs.pop('beams', self.beams) if beams is None: beams = self.beams VRODS = VaryingResolutionOneDSpectrum out = super(VRODS, self)._new_spectrum_with(beams=beams, **kwargs) return out def __array_finalize__(self, obj): super(VaryingResolutionOneDSpectrum, self).__array_finalize__(obj) self._beams = getattr(obj, '_beams', None) if getattr(obj, 'goodbeams_mask', None) is not None: # do NOT use the setter here, because we sometimes need to write # intermediate size-mismatch things that later get fixed, e.g., in # __getitem__ below self._goodbeams_mask = getattr(obj, 'goodbeams_mask', None) def __getitem__(self, key): new_qty = super(VaryingResolutionOneDSpectrum, self).__getitem__(key) # use the goodbeams_mask setter here because it checks size new_qty.goodbeams_mask = self.goodbeams_mask[key] new_qty.beams = self.unmasked_beams[key] return new_qty ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/masks.py0000644000175100001710000007406200000000000020122 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import abc import uuid import warnings import tempfile from six.moves import zip import numpy as np from numpy.lib.stride_tricks import as_strided import dask.array as da from astropy.wcs import InconsistentAxisTypesError from astropy.io import fits from . 
import wcs_utils from .utils import WCSWarning __all__ = ['MaskBase', 'InvertedMask', 'CompositeMask', 'BooleanArrayMask', 'LazyMask', 'LazyComparisonMask', 'FunctionMask'] # Global version of the with_spectral_unit docs to avoid duplicating them with_spectral_unit_docs = """ Parameters ---------- unit : u.Unit Any valid spectral unit: velocity, (wave)length, or frequency. Only vacuum units are supported. velocity_convention : u.doppler_relativistic, u.doppler_radio, or u.doppler_optical The velocity convention to use for the output velocity axis. Required if the output type is velocity. rest_value : u.Quantity A rest wavelength or frequency with appropriate units. Required if output type is velocity. The cube's WCS should include this already if the *input* type is velocity, but the WCS's rest wavelength/frequency can be overridden with this parameter. """ def is_broadcastable_and_smaller(shp1, shp2): """ Test if shape 1 can be broadcast to shape 2, not allowing the case where shape 2 has a dimension length 1 """ for a, b in zip(shp1[::-1], shp2[::-1]): # b==1 is broadcastable but not desired if a == 1 or a == b: pass else: return False return True def dims_to_skip(shp1, shp2): """ For a shape `shp1` that is broadcastable to shape `shp2`, specify which dimensions are length 1. Parameters ---------- keep : bool If True, return the dimensions to keep rather than those to remove """ if not is_broadcastable_and_smaller(shp1, shp2): raise ValueError("Cannot broadcast {0} to {1}".format(shp1,shp2)) dims = [] for ii,(a, b) in enumerate(zip(shp1[::-1], shp2[::-1])): # b==1 is broadcastable but not desired if a == 1: dims.append(len(shp2) - ii - 1) elif a == b: pass else: raise ValueError("This should not be possible") if len(shp1) < len(shp2): dims += list(range(len(shp2)-len(shp1))) return dims def view_of_subset(shp1, shp2, view): """ Given two shapes and a view, assuming that shape 1 can be broadcast to shape 2, return the sub-view that applies to shape 1 """ # if the view is 1-dimensional, we can't subset it if not hasattr(view, '__len__'): return view dts = dims_to_skip(shp1, shp2) if view: cv_view = [x for ii,x in enumerate(view) if ii not in dts] else: # if no view is specified, still need to slice cv_view = [x for ii,x in enumerate([slice(None)]*3) if ii not in dts] # return type matters # array[[0,0]] = [array[0], array[0]] # array[(0,0)] = array[0,0] return tuple(cv_view) class MaskBase(object): __metaclass__ = abc.ABCMeta def include(self, data=None, wcs=None, view=(), **kwargs): """ Return a boolean array indicating which values should be included. If ``view`` is passed, only the sliced mask will be returned, which avoids having to load the whole mask in memory. Otherwise, the whole mask is returned in-memory. kwargs are passed to _validate_wcs """ self._validate_wcs(data, wcs, **kwargs) return self._include(data=data, wcs=wcs, view=view) # Commented out, but left as a possibility, because including this didn't fix any # of the problems we encountered with matplotlib plotting def view(self, view=()): """ Compatibility tool: if a numpy.ma.ufunc is run on the mask, it will try to grab a view of the mask, which needs to appear to numpy as a true array. This can be important for, e.g., plotting. Numpy's convention is that masked=True means "masked out" .. note:: I don't know if there are broader concerns or consequences from including this 'view' tool here. 
""" return self.exclude(view=view) def _validate_wcs(self, new_data=None, new_wcs=None, **kwargs): """ This method can be overridden in cases where the data and WCS have to conform to some rules. This gets called automatically when ``include`` or ``exclude`` are called. """ pass @abc.abstractmethod def _include(self, data=None, wcs=None, view=()): pass def exclude(self, data=None, wcs=None, view=(), **kwargs): """ Return a boolean array indicating which values should be excluded. If ``view`` is passed, only the sliced mask will be returned, which avoids having to load the whole mask in memory. Otherwise, the whole mask is returned in-memory. kwargs are passed to _validate_wcs """ self._validate_wcs(data, wcs, **kwargs) return self._exclude(data=data, wcs=wcs, view=view) def _exclude(self, data=None, wcs=None, view=()): return np.logical_not(self._include(data=data, wcs=wcs, view=view)) def any(self): return np.any(self.exclude()) def _flattened(self, data, wcs=None, view=()): """ Return a flattened array of the included elements of cube Parameters ---------- data : array-like The data array to flatten view : tuple, optional Any slicing to apply to the data before flattening Returns ------- flat_array : `~numpy.ndarray` A 1-D ndarray containing the flattened output Notes ----- This is an internal method used by :class:`SpectralCube`. """ mask = self.include(data=data, wcs=wcs, view=view) # Workaround for https://github.com/dask/dask/issues/6089 if isinstance(data, da.Array) and not isinstance(mask, da.Array): mask = da.asarray(mask, name=str(uuid.uuid4())) # if not isinstance(data, da.Array) and isinstance(mask, da.Array): # mask = mask.compute() return data[view][mask] def _filled(self, data, wcs=None, fill=np.nan, view=(), use_memmap=False, **kwargs): """ Replace the excluded elements of *array* with *fill*. Parameters ---------- data : array-like Input array fill : number Replacement value view : tuple, optional Any slicing to apply to the data before flattening use_memmap : bool Use a memory map to store the output data? Returns ------- filled_array : `~numpy.ndarray` A 1-D ndarray containing the filled output Notes ----- This is an internal method used by :class:`SpectralCube`. Users should use the property :meth:`MaskBase.filled_data` """ # Must convert to floating point, but should not change from inherited # type otherwise dt = np.find_common_type([data.dtype], [float]) if use_memmap and data.size > 0: ntf = tempfile.NamedTemporaryFile() sliced_data = np.memmap(ntf, mode='w+', shape=data[view].shape, dtype=dt) sliced_data[:] = data[view] else: sliced_data = data[view].astype(dt) ex = self.exclude(data=data, wcs=wcs, view=view, **kwargs) return np.ma.masked_array(sliced_data, mask=ex).filled(fill) def __and__(self, other): return CompositeMask(self, other, operation='and') def __or__(self, other): return CompositeMask(self, other, operation='or') def __xor__(self, other): return CompositeMask(self, other, operation='xor') def __invert__(self): return InvertedMask(self) @property def shape(self): raise NotImplementedError("{0} mask classes do not have shape attributes." 
.format(self.__class__.__name__)) @property def ndim(self): return len(self.shape) @property def size(self): return np.product(self.shape) @property def dtype(self): return np.dtype('bool') def __getitem__(self, view): raise NotImplementedError("Slicing not supported by mask class {0}" .format(self.__class__.__name__)) def quicklook(self, view, wcs=None, filename=None, use_aplpy=True, aplpy_kwargs={}): ''' View a 2D slice of the mask, specified by view. Parameters ---------- view : tuple Slicing to apply to the mask. Must return a 2D slice. wcs : astropy.wcs.WCS, optional WCS object to use in plotting the mask slice. filename : str, optional Filename of the output image. Enables saving of the plot. use_aplpy : bool, optional Try plotting with the aplpy package aplpy_kwargs : dict, optional kwargs passed to `~aplpy.FITSFigure`. ''' view_twod = self.include(view=view, wcs=wcs) if use_aplpy: if wcs is not None: hdu = fits.PrimaryHDU(view_twod.astype(int), wcs.to_header()) else: hdu = fits.PrimaryHDU(view_twod.astype(int)) try: import aplpy FITSFigure = aplpy.FITSFigure(hdu, **aplpy_kwargs) FITSFigure.show_grayscale() FITSFigure.add_colorbar() if filename is not None: FITSFigure.save(filename) except (InconsistentAxisTypesError, ImportError): # fall back to the matplotlib path below use_aplpy = False if not use_aplpy: from matplotlib import pyplot pyplot.imshow(view_twod) if filename is not None: pyplot.gcf().savefig(filename) def _get_new_wcs(self, unit, velocity_convention=None, rest_value=None): """ Returns a new WCS with a different Spectral Axis unit """ from .spectral_axis import convert_spectral_axis, determine_ctype_from_vconv out_ctype = determine_ctype_from_vconv(self._wcs.wcs.ctype[self._wcs.wcs.spec], unit, velocity_convention=velocity_convention) newwcs = convert_spectral_axis(self._wcs, unit, out_ctype, rest_value=rest_value) newwcs.wcs.set() return newwcs _get_new_wcs.__doc__ += with_spectral_unit_docs class InvertedMask(MaskBase): def __init__(self, mask): self._mask = mask @property def shape(self): return self._mask.shape def _include(self, data=None, wcs=None, view=()): return np.logical_not(self._mask.include(data=data, wcs=wcs, view=view)) def __getitem__(self, view): return InvertedMask(self._mask[view]) def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Get an InvertedMask copy with a WCS in the modified unit """ newmask = self._mask.with_spectral_unit(unit, velocity_convention=velocity_convention, rest_value=rest_value) return InvertedMask(newmask) with_spectral_unit.__doc__ += with_spectral_unit_docs class CompositeMask(MaskBase): """ A combination of several masks. The included masks are treated with the specified operation.
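Composite masks are usually built implicitly by combining two masks with the ``&``, ``|``, or ``^`` operators; e.g., ``mask1 & mask2`` is equivalent to ``CompositeMask(mask1, mask2, operation='and')``.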
Parameters ---------- mask1, mask2 : Masks The two masks to composite operation : str Either 'and' or 'or'; the operation used to combine the masks """ def __init__(self, mask1, mask2, operation='and'): if isinstance(mask1, np.ndarray) and isinstance(mask2, MaskBase) and hasattr(mask2, 'shape'): if not is_broadcastable_and_smaller(mask1.shape, mask2.shape): raise ValueError("Mask1 shape is not broadcastable to Mask2 shape: " "%s vs %s" % (mask1.shape, mask2.shape)) mask1 = BooleanArrayMask(mask1, mask2._wcs, shape=mask2.shape) elif isinstance(mask2, np.ndarray) and isinstance(mask1, MaskBase) and hasattr(mask1, 'shape'): if not is_broadcastable_and_smaller(mask2.shape, mask1.shape): raise ValueError("Mask2 shape is not broadcastable to Mask1 shape: " "%s vs %s" % (mask2.shape, mask1.shape)) mask2 = BooleanArrayMask(mask2, mask1._wcs, shape=mask1.shape) # both entries must have compatible, which effectively means # equal, WCSes. Unless one is a function. if hasattr(mask1, '_wcs') and hasattr(mask2, '_wcs'): mask1._validate_wcs(new_data=None, wcs=mask2._wcs) # In order to composite composites, they must have a _wcs defined. # (maybe this should be a property?) self._wcs = mask1._wcs elif hasattr(mask1, '_wcs'): # if one mask doesn't have a WCS, but the other does, the # compositemask should have the same WCS as the one that does self._wcs = mask1._wcs elif hasattr(mask2, '_wcs'): self._wcs = mask2._wcs self._mask1 = mask1 self._mask2 = mask2 self._operation = operation def _validate_wcs(self, new_data=None, new_wcs=None, **kwargs): self._mask1._validate_wcs(new_data=new_data, new_wcs=new_wcs, **kwargs) self._mask2._validate_wcs(new_data=new_data, new_wcs=new_wcs, **kwargs) @property def shape(self): try: assert self._mask1.shape == self._mask2.shape return self._mask1.shape except AssertionError: raise ValueError("The composite mask does not have a well-defined " "shape; its two components have shapes {0} and " "{1}.".format(self._mask1.shape, self._mask2.shape)) except NotImplementedError: raise ValueError("The composite mask contains at least one " "component with no defined shape.") def _include(self, data=None, wcs=None, view=()): result_mask_1 = self._mask1._include(data=data, wcs=wcs, view=view) result_mask_2 = self._mask2._include(data=data, wcs=wcs, view=view) if self._operation == 'and': return np.bitwise_and(result_mask_1, result_mask_2) elif self._operation == 'or': return np.bitwise_or(result_mask_1, result_mask_2) elif self._operation == 'xor': return np.bitwise_xor(result_mask_1, result_mask_2) else: raise ValueError("Operation '{0}' not supported".format(self._operation)) def __getitem__(self, view): return CompositeMask(self._mask1[view], self._mask2[view], operation=self._operation) def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Get a CompositeMask copy in which each component has a WCS in the modified unit """ newmask1 = self._mask1.with_spectral_unit(unit, velocity_convention=velocity_convention, rest_value=rest_value) newmask2 = self._mask2.with_spectral_unit(unit, velocity_convention=velocity_convention, rest_value=rest_value) return CompositeMask(newmask1, newmask2, self._operation) with_spectral_unit.__doc__ += with_spectral_unit_docs class BooleanArrayMask(MaskBase): """ A mask defined as an array on a spectral cube WCS Parameters ---------- mask: `numpy.ndarray` A boolean numpy ndarray wcs: `astropy.wcs.WCS` The WCS object shape: tuple The shape of the region the array is masking. 
This is *required* if ``mask.ndim != data.ndim`` to provide rules for how to broadcast the mask """ def __init__(self, mask, wcs, shape=None, include=True): self._mask_type = 'include' if include else 'exclude' self._wcs = wcs self._wcs_whitelist = set() #if mask.ndim != 3 and (shape is None or len(shape) != 3): # raise ValueError("When creating a BooleanArrayMask with <3 dimensions, " # "the shape of the 3D array must be specified.") if shape is not None and not is_broadcastable_and_smaller(mask.shape, shape): raise ValueError("Mask cannot be broadcast to the specified shape.") self._shape = shape or mask.shape self._mask = mask """ Developer note (AG): The logic in the following section seems overly complicated. All of it is added to make sure that a 1D boolean array along the spectral axis can be created. I thought this was possible previously, but experienced many errors in my latest attempt to use one. """ # If a shape is given, we may need to broadcast to that shape if shape is not None: # these are dimensions that simply don't exist n_empty_dims = (len(self._shape)-mask.ndim) # these are dimensions of shape 1 that would be squeezed away but may # be needed to make the arrays broadcastable (e.g., mask[:,None,None]) # Need to add n_empty_dims because (1,2) will broadcast to (3,1,2) # and there will be no extra dims. extra_dims = [ii for ii,(sh1,sh2) in enumerate(zip((0,)*n_empty_dims + mask.shape, shape)) if sh1 == 1 and sh1 != sh2] # Add the [None,]'s and the nonexistent dimensions n_extra_dims = n_empty_dims + len(extra_dims) # if there are no extra dims, we're done, the original shape is fine if n_extra_dims > 0: strides = (0,)*n_empty_dims + mask.strides for ed in extra_dims: # all of the [None,] dims should have 0 stride assert strides[ed] == 0,"Stride shape failure" self._mask = as_strided(mask, shape=self.shape, strides=strides) # Make sure the mask shape matches the Mask object shape assert self._mask.shape == self.shape,"Shape initialization failure" def _validate_wcs(self, new_data=None, new_wcs=None, **kwargs): """ Check that the new WCS matches the current one Parameters ---------- kwargs : dict Passed to `wcs_utils.check_equality` """ if new_data is not None and not is_broadcastable_and_smaller(self._mask.shape, new_data.shape): raise ValueError("data shape cannot be broadcast to match mask shape") if new_wcs is not None: if new_wcs not in self._wcs_whitelist: try: if not wcs_utils.check_equality(new_wcs, self._wcs, warn_missing=True, **kwargs): raise ValueError("WCS does not match mask WCS") else: self._wcs_whitelist.add(new_wcs) except InconsistentAxisTypesError: warnings.warn("Inconsistent axis type encountered; WCS is " "invalid and therefore will not be checked " "against other WCSes.", WCSWarning ) self._wcs_whitelist.add(new_wcs) def _include(self, data=None, wcs=None, view=()): result_mask = self._mask[view] return result_mask if self._mask_type == 'include' else np.logical_not(result_mask) def _exclude(self, data=None, wcs=None, view=()): result_mask = self._mask[view] return result_mask if self._mask_type == 'exclude' else np.logical_not(result_mask) @property def shape(self): return self._shape def __getitem__(self, view): return BooleanArrayMask(self._mask[view], wcs_utils.slice_wcs(self._wcs, view, shape=self.shape, drop_degenerate=True), shape=self._mask[view].shape) def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Get a BooleanArrayMask copy with a WCS in the modified unit """ newwcs = self._get_new_wcs(unit, velocity_convention,
rest_value) newmask = BooleanArrayMask(self._mask, newwcs, include=self._mask_type=='include') return newmask with_spectral_unit.__doc__ += with_spectral_unit_docs class LazyMask(MaskBase): """ A boolean mask defined by the evaluation of a function on a fixed dataset. This is conceptually identical to a fixed boolean mask as in :class:`BooleanArrayMask` but defers the evaluation of the mask until it is needed. Parameters ---------- function : callable The function to apply to ``data``. This function should accept a numpy array, which will be a subset of the data array passed to __init__. It should return a boolean array, where `True` values indicate which pixels are valid/unaffected by masking. data : array-like The array to evaluate ``function`` on. This should support Numpy-like slicing syntax. wcs : `~astropy.wcs.WCS` The WCS of the input data, which is used to define the coordinates for which the boolean mask is defined. """ def __init__(self, function, cube=None, data=None, wcs=None): self._function = function if cube is not None and (data is not None or wcs is not None): raise ValueError("Pass only cube or (data & wcs)") elif cube is not None: self._data = cube._data self._wcs = cube._wcs elif data is not None and wcs is not None: self._data = data self._wcs = wcs else: raise ValueError("Either a cube or (data & wcs) is required.") self._wcs_whitelist = set() @property def shape(self): return self._data.shape def _validate_wcs(self, new_data=None, new_wcs=None, **kwargs): """ Check that the new WCS matches the current one Parameters ---------- kwargs : dict Passed to `wcs_utils.check_equality` """ if new_data is not None: if not is_broadcastable_and_smaller(new_data.shape, self._data.shape): raise ValueError("data shape cannot be broadcast to match mask shape") if new_wcs is not None: if new_wcs not in self._wcs_whitelist: if not wcs_utils.check_equality(new_wcs, self._wcs, warn_missing=True, **kwargs): raise ValueError("WCS does not match mask WCS") else: self._wcs_whitelist.add(new_wcs) def _include(self, data=None, wcs=None, view=()): self._validate_wcs(data, wcs) return self._function(self._data[view]) def __getitem__(self, view): return LazyMask(self._function, data=self._data[view], wcs=wcs_utils.slice_wcs(self._wcs, view, shape=self._data.shape, drop_degenerate=True)) def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Get a LazyMask copy with a WCS in the modified unit """ newwcs = self._get_new_wcs(unit, velocity_convention, rest_value) newmask = LazyMask(self._function, data=self._data, wcs=newwcs) return newmask with_spectral_unit.__doc__ += with_spectral_unit_docs class LazyComparisonMask(LazyMask): """ A boolean mask defined by the evaluation of a comparison function between a fixed dataset and some other value. This is conceptually similar to the :class:`LazyMask` but it will ensure that the comparison value can be compared to the data. Parameters ---------- function : callable The function to apply to ``data``. This function should accept a numpy array, which will be the data array passed to __init__, and a second argument also passed to __init__. It should return a boolean array, where `True` values indicate which pixels are valid/unaffected by masking. comparison_value : float or array The comparison value for the array data : array-like The array to evaluate ``function`` on. This should support Numpy-like slicing syntax.
wcs : `~astropy.wcs.WCS` The WCS of the input data, which is used to define the coordinates for which the boolean mask is defined. """ def __init__(self, function, comparison_value, cube=None, data=None, wcs=None): self._function = function if cube is not None and (data is not None or wcs is not None): raise ValueError("Pass only cube or (data & wcs)") elif cube is not None: self._data = cube._data self._wcs = cube._wcs elif data is not None and wcs is not None: self._data = data self._wcs = wcs else: raise ValueError("Either a cube or (data & wcs) is required.") if (hasattr(comparison_value, 'shape') and not is_broadcastable_and_smaller(self._data.shape, comparison_value.shape)): raise ValueError("The data and the comparison value cannot " "be broadcast to match shape") self._comparison_value = comparison_value self._wcs_whitelist = set() def _include(self, data=None, wcs=None, view=()): self._validate_wcs(data, wcs) if hasattr(self._comparison_value, 'shape') and self._comparison_value.shape: cv_view = view_of_subset(self._comparison_value.shape, self._data.shape, view) return self._function(self._data[view], self._comparison_value[cv_view]) else: return self._function(self._data[view], self._comparison_value) def __getitem__(self, view): if hasattr(self._comparison_value, 'shape') and self._comparison_value.shape: cv_view = view_of_subset(self._comparison_value.shape, self._data.shape, view) return LazyComparisonMask(self._function, data=self._data[view], comparison_value=self._comparison_value[cv_view], wcs=wcs_utils.slice_wcs(self._wcs, view, drop_degenerate=True)) else: return LazyComparisonMask(self._function, data=self._data[view], comparison_value=self._comparison_value, wcs=wcs_utils.slice_wcs(self._wcs, view, drop_degenerate=True)) def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Get a LazyComparisonMask copy with a WCS in the modified unit """ newwcs = self._get_new_wcs(unit, velocity_convention, rest_value) newmask = LazyComparisonMask(self._function, data=self._data, comparison_value=self._comparison_value, wcs=newwcs) return newmask class FunctionMask(MaskBase): """ A mask defined by a function that is evaluated at run-time using the data passed to the mask. This class differs from :class:`LazyMask` in the arguments which are passed to the function. FunctionMasks receive an array, wcs object, and view, whereas LazyMasks receive pre-sliced views into an array specified at mask-creation time. Parameters ---------- function : callable The function to evaluate the mask. The call signature should be ``function(data, wcs, slice)`` where ``data`` and ``wcs`` are the arguments that get passed to e.g. ``include``, ``exclude``, ``_filled``, and ``_flattened``. The function should return a boolean array, where `True` values indicate which pixels are valid / unaffected by masking.
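For example, ``FunctionMask(lambda data, wcs, view: np.isfinite(data[view]))`` builds a mask that includes only finite values; the returned array matches the shape of ``data[view]``, as required.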
""" def __init__(self, function): self._function = function def _validate_wcs(self, new_data=None, new_wcs=None, **kwargs): pass def _include(self, data=None, wcs=None, view=()): result = self._function(data, wcs, view) if result.shape != data[view].shape: raise ValueError("Function did not return mask with correct shape - expected {0}, got {1}".format(data[view].shape, result.shape)) return result def __getitem__(self, slice): return self def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Functional masks do not have WCS defined, so this simply returns a copy of the current mask in order to be consistent with ``with_spectral_unit`` from other Masks """ return FunctionMask(self._function) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/np_compat.py0000644000175100001710000000205300000000000020753 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import numpy as np from distutils.version import LooseVersion def allbadtonan(function): """ Wrapper of numpy's nansum etc.: for <=1.8, just return the function's results. For >=1.9, any axes with all-nan values will have all-nan outputs in the collapsed version """ def f(data, axis=None, keepdims=None): if keepdims is None: result = function(data, axis=axis) else: result = function(data, axis=axis, keepdims=keepdims) if LooseVersion(np.__version__) >= LooseVersion('1.9.0') and hasattr(result, '__len__'): if axis is None: if np.all(np.isnan(data)): return np.nan else: return result if keepdims is None: nans = np.all(np.isnan(data), axis=axis) else: nans = np.all(np.isnan(data), axis=axis, keepdims=keepdims) result[nans] = np.nan return result return f ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/spectral_axis.py0000644000175100001710000004317300000000000021644 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import numpy as np from astropy import wcs from astropy import units as u from astropy import constants import warnings from .utils import ExperimentalImplementationWarning def _parse_velocity_convention(vc): if vc in (u.doppler_radio, 'radio', 'RADIO', 'VRAD', 'F', 'FREQ'): return u.doppler_radio elif vc in (u.doppler_optical, 'optical', 'OPTICAL', 'VOPT', 'W', 'WAVE'): return u.doppler_optical elif vc in (u.doppler_relativistic, 'relativistic', 'RELATIVE', 'VREL', 'speed', 'V', 'VELO'): return u.doppler_relativistic # These are the only linear transformations allowed LINEAR_CTYPES = {u.doppler_optical: 'VOPT', u.doppler_radio: 'VRAD', u.doppler_relativistic: 'VELO'} LINEAR_CTYPE_CHARS = {u.doppler_optical: 'W', u.doppler_radio: 'F', u.doppler_relativistic: 'V'} ALL_CTYPES = {'speed': LINEAR_CTYPES, 'frequency': 'FREQ', 'length': 'WAVE'} CTYPE_TO_PHYSICALTYPE = {'WAVE': 'length', 'AIR': 'air wavelength', 'AWAV': 'air wavelength', 'FREQ': 'frequency', 'VELO': 'speed', 'VRAD': 'speed', 'VOPT': 'speed', } CTYPE_CHAR_TO_PHYSICALTYPE = {'W': 'length', 'A': 'air wavelength', 'F': 'frequency', 'V': 'speed'} CTYPE_TO_PHYSICALTYPE.update(CTYPE_CHAR_TO_PHYSICALTYPE) PHYSICAL_TYPE_TO_CTYPE = dict([(v,k) for k,v in CTYPE_CHAR_TO_PHYSICALTYPE.items()]) PHYSICAL_TYPE_TO_CHAR = {'speed': 'V', 'frequency': 'F', 'length': 'W'} # Used to indicate the intial / final sampling system WCS_UNIT_DICT = {'F': u.Hz, 'W': u.m, 'V': u.m/u.s} PHYS_UNIT_DICT = {'length': u.m, 'frequency': u.Hz, 'speed': u.m/u.s} 
LINEAR_CUNIT_DICT = {'VRAD': u.Hz, 'VOPT': u.m, 'FREQ': u.Hz, 'WAVE': u.m, 'VELO': u.m/u.s, 'AWAV': u.m} LINEAR_CUNIT_DICT.update(WCS_UNIT_DICT) def unit_from_header(header, spectral_axis_number=3): """ Retrieve the spectral unit from a header """ cunitind = 'CUNIT{0}'.format(spectral_axis_number) if cunitind in header: return u.Unit(header[cunitind]) def wcs_unit_scale(unit): """ Determine the appropriate scaling factor to get to the equivalent WCS unit """ for wu in WCS_UNIT_DICT.values(): if wu.is_equivalent(unit): return wu.to(unit) def parse_phys_type(unit): ''' As of astropy 4.3.dev1499+g5b09f9dd9, the physical type of a speed is now "speed/velocity". This parses those types and returns "speed", which works with our dictionary definitions and will also continue to work with previous astropy versions. ''' return 'speed' if 'speed' in str(unit.physical_type) else str(unit.physical_type) def determine_vconv_from_ctype(ctype): """ Given a CTYPE, say what velocity convention it is associated with, i.e. what unit the velocity is linearly proportional to Parameters ---------- ctype : str The spectral CTYPE """ if len(ctype) < 5: return _parse_velocity_convention(ctype) elif len(ctype) == 8: return _parse_velocity_convention(ctype[7]) else: raise ValueError("A valid ctype must have at most 4 or exactly 8 characters.") def determine_ctype_from_vconv(ctype, unit, velocity_convention=None): """ Given a CTYPE describing the current WCS and an output unit and velocity convention, determine the appropriate output CTYPE Examples -------- >>> determine_ctype_from_vconv('VELO-F2V', u.Hz) 'FREQ' >>> determine_ctype_from_vconv('VELO-F2V', u.m) 'WAVE-F2W' >>> determine_ctype_from_vconv('FREQ', u.m/u.s) # doctest: +SKIP ... ValueError: A velocity convention must be specified >>> determine_ctype_from_vconv('FREQ', u.m/u.s, velocity_convention=u.doppler_radio) 'VRAD' >>> determine_ctype_from_vconv('FREQ', u.m/u.s, velocity_convention=u.doppler_optical) 'VOPT-F2W' >>> determine_ctype_from_vconv('FREQ', u.m/u.s, velocity_convention=u.doppler_relativistic) 'VELO-F2V' """ unit = u.Unit(unit) if len(ctype) > 4: in_physchar = ctype[5] else: lin_cunit = LINEAR_CUNIT_DICT[ctype] in_physchar = PHYSICAL_TYPE_TO_CHAR[parse_phys_type(lin_cunit)] if parse_phys_type(unit) == 'speed': if velocity_convention is None and ctype[0] == 'V': # Special case: velocity <-> velocity doesn't care about convention return ctype elif velocity_convention is None: raise ValueError('A velocity convention must be specified') vcin = _parse_velocity_convention(ctype[:4]) vcout = _parse_velocity_convention(velocity_convention) if vcin == vcout: return LINEAR_CTYPES[vcout] else: return "{type}-{s1}2{s2}".format(type=LINEAR_CTYPES[vcout], s1=in_physchar, s2=LINEAR_CTYPE_CHARS[vcout]) else: in_phystype = CTYPE_TO_PHYSICALTYPE[in_physchar] if in_phystype == parse_phys_type(unit): # Linear case return ALL_CTYPES[in_phystype] else: # Nonlinear case out_physchar = PHYSICAL_TYPE_TO_CTYPE[parse_phys_type(unit)] return "{type}-{s1}2{s2}".format(type=ALL_CTYPES[parse_phys_type(unit)], s1=in_physchar, s2=out_physchar) def get_rest_value_from_wcs(mywcs): if mywcs.wcs.restfrq: ref_value = mywcs.wcs.restfrq*u.Hz return ref_value elif mywcs.wcs.restwav: ref_value = mywcs.wcs.restwav*u.m return ref_value
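# Note that get_rest_value_from_wcs falls through and returns None when
# neither restfrq nor restwav is set on the WCS; convert_spectral_axis below
# checks for that case before requiring a rest value.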
# Ref: https://casa.nrao.edu/casadocs/casa-5-1.2/reference-material/spectral-frames def doppler_z(restfreq): restfreq = restfreq.to_value("GHz") return [(u.GHz, u.km / u.s, lambda x: (restfreq - x) / x, lambda x: restfreq / (1 + x) )] def doppler_beta(restfreq): restfreq = restfreq.to_value("GHz") return [(u.GHz, u.km / u.s, lambda x: constants.si.c.to_value('km/s') * ((1 - ((x / restfreq) ** 2)) / (1 + ((x / restfreq) ** 2))), lambda x: restfreq * np.sqrt((constants.si.c.to_value("km/s") - x) / (x + constants.si.c.to_value("km/s"))) )] def doppler_gamma(restfreq): restfreq = restfreq.to_value("GHz") return [(u.GHz, u.km / u.s, lambda x: constants.si.c.to_value("km/s") * ((1 + (x / restfreq) ** 2) / (2 * x / restfreq)), lambda x: restfreq * (x / constants.si.c.to_value("km/s") + np.sqrt((x / constants.si.c.to_value("km/s")) ** 2 - 1)) )] def convert_spectral_axis(mywcs, outunit, out_ctype, rest_value=None): """ Convert a spectral axis from its unit to a specified out unit with a given output ctype Only VACUUM units are supported (not air) Process: 1. Convert the input unit to its equivalent linear unit 2. Convert the input linear unit to the output linear unit 3. Convert the output linear unit to the output unit """ # If the WCS includes a rest frequency/wavelength, convert it to frequency # or wavelength first. This allows the possibility of changing the rest # frequency wcs_rv = get_rest_value_from_wcs(mywcs) inunit = u.Unit(mywcs.wcs.cunit[mywcs.wcs.spec]) outunit = u.Unit(outunit) # If wcs_rv is set and speed -> speed, then we're changing the reference # location and we need to convert to meters or Hz first if ((parse_phys_type(inunit) == 'speed' and parse_phys_type(outunit) == 'speed' and wcs_rv is not None)): mywcs = convert_spectral_axis(mywcs, wcs_rv.unit, ALL_CTYPES[parse_phys_type(wcs_rv.unit)], rest_value=wcs_rv) inunit = u.Unit(mywcs.wcs.cunit[mywcs.wcs.spec]) elif (parse_phys_type(inunit) == 'speed' and parse_phys_type(outunit) == 'speed' and wcs_rv is None): # If there is no reference change, we want an identical WCS, since # WCS doesn't know about units *at all* newwcs = mywcs.deepcopy() return newwcs #crval_out = (mywcs.wcs.crval[mywcs.wcs.spec] * inunit).to(outunit) #cdelt_out = (mywcs.wcs.cdelt[mywcs.wcs.spec] * inunit).to(outunit) #newwcs.wcs.cdelt[newwcs.wcs.spec] = cdelt_out.value #newwcs.wcs.cunit[newwcs.wcs.spec] = cdelt_out.unit.to_string(format='fits') #newwcs.wcs.crval[newwcs.wcs.spec] = crval_out.value #newwcs.wcs.ctype[newwcs.wcs.spec] = out_ctype #return newwcs in_spec_ctype = mywcs.wcs.ctype[mywcs.wcs.spec] # Check whether we need to convert the rest value first ref_value = None if 'speed' in parse_phys_type(outunit): if rest_value is None: rest_value = wcs_rv if rest_value is None: raise ValueError("If converting from wavelength/frequency to speed, " "a reference wavelength/frequency is required.") ref_value = rest_value.to(u.Hz, u.spectral()) elif 'speed' in parse_phys_type(inunit): # The rest frequency and wavelength should be equivalent if rest_value is not None: ref_value = rest_value elif wcs_rv is not None: ref_value = wcs_rv else: raise ValueError("If converting from speed to wavelength/frequency, " "a reference wavelength/frequency is required.") # If the input unit is not linearly sampled, its linear equivalent will be # the 8th character in the ctype, and the linearly-sampled ctype will be # the 6th character # e.g.: VOPT-F2V lin_ctype = (in_spec_ctype[7] if len(in_spec_ctype) > 4 else in_spec_ctype[:4]) lin_cunit = (LINEAR_CUNIT_DICT[lin_ctype] if 
lin_ctype in LINEAR_CUNIT_DICT else mywcs.wcs.cunit[mywcs.wcs.spec]) in_vcequiv = _parse_velocity_convention(in_spec_ctype[:4]) out_ctype_conv = out_ctype[7] if len(out_ctype) > 4 else out_ctype[:4] if CTYPE_TO_PHYSICALTYPE[out_ctype_conv] == 'air wavelength': raise NotImplementedError("Conversion to air wavelength is not supported.") out_lin_cunit = (LINEAR_CUNIT_DICT[out_ctype_conv] if out_ctype_conv in LINEAR_CUNIT_DICT else outunit) out_vcequiv = _parse_velocity_convention(out_ctype_conv) # Load the input values crval_in = (mywcs.wcs.crval[mywcs.wcs.spec] * inunit) # the cdelt matrix may not be correctly populated: need to account for cd, # cdelt, and pc cdelt_in = (mywcs.pixel_scale_matrix[mywcs.wcs.spec, mywcs.wcs.spec] * inunit) if in_spec_ctype == 'AWAV': warnings.warn("Support for air wavelengths is experimental and only " "works in the forward direction (air->vac, not vac->air).", ExperimentalImplementationWarning ) cdelt_in = air_to_vac_deriv(crval_in) * cdelt_in crval_in = air_to_vac(crval_in) in_spec_ctype = 'WAVE' # 1. Convert input to input, linear if in_vcequiv is not None and ref_value is not None: crval_lin1 = crval_in.to(lin_cunit, u.spectral() + in_vcequiv(ref_value)) else: crval_lin1 = crval_in.to(lin_cunit, u.spectral()) cdelt_lin1 = cdelt_derivative(crval_in, cdelt_in, # equivalent: inunit.physical_type intype=CTYPE_TO_PHYSICALTYPE[in_spec_ctype[:4]], outtype=parse_phys_type(lin_cunit), rest=ref_value, linear=True ) # 2. Convert input, linear to output, linear if ref_value is None: if in_vcequiv is not None: pass # consider raising a ValueError here; not clear if this is valid crval_lin2 = crval_lin1.to(out_lin_cunit, u.spectral()) else: # at this stage, the transition can ONLY be relativistic, because the V # frame (as a linear frame) is only defined as "apparent velocity" crval_lin2 = crval_lin1.to(out_lin_cunit, u.spectral() + u.doppler_relativistic(ref_value)) # For cases like VRAD <-> FREQ and VOPT <-> WAVE, this will be linear too: linear_middle = in_vcequiv == out_vcequiv cdelt_lin2 = cdelt_derivative(crval_lin1, cdelt_lin1, intype=parse_phys_type(lin_cunit), outtype=CTYPE_TO_PHYSICALTYPE[out_ctype_conv], rest=ref_value, linear=linear_middle) # 3. 
Convert output, linear to output if out_vcequiv is not None and ref_value is not None: crval_out = crval_lin2.to(outunit, out_vcequiv(ref_value) + u.spectral()) #cdelt_out = cdelt_lin2.to(outunit, out_vcequiv(ref_value) + u.spectral()) cdelt_out = cdelt_derivative(crval_lin2, cdelt_lin2, intype=CTYPE_TO_PHYSICALTYPE[out_ctype_conv], outtype=parse_phys_type(outunit), rest=ref_value, linear=True ).to(outunit) else: crval_out = crval_lin2.to(outunit, u.spectral()) cdelt_out = cdelt_lin2.to(outunit, u.spectral()) if crval_out.unit != cdelt_out.unit: # this should not be possible, but it's a sanity check raise ValueError("Conversion failed: the units of cdelt and crval don't match.") # A cdelt of 0 would be meaningless if cdelt_out.value == 0: raise ValueError("Conversion failed: the output CDELT would be 0.") newwcs = mywcs.deepcopy() if hasattr(newwcs.wcs,'cd'): newwcs.wcs.cd[newwcs.wcs.spec, newwcs.wcs.spec] = cdelt_out.value # todo: would be nice to have an assertion here that no off-diagonal # values for the spectral WCS are nonzero, but this is a nontrivial # check else: newwcs.wcs.cdelt[newwcs.wcs.spec] = cdelt_out.value newwcs.wcs.cunit[newwcs.wcs.spec] = cdelt_out.unit.to_string(format='fits') newwcs.wcs.crval[newwcs.wcs.spec] = crval_out.value newwcs.wcs.ctype[newwcs.wcs.spec] = out_ctype if rest_value is not None: if parse_phys_type(rest_value.unit) == 'frequency': newwcs.wcs.restfrq = rest_value.to(u.Hz).value elif parse_phys_type(rest_value.unit) == 'length': newwcs.wcs.restwav = rest_value.to(u.m).value else: raise ValueError("Rest Value was specified, but not in frequency or length units") return newwcs def cdelt_derivative(crval, cdelt, intype, outtype, linear=False, rest=None): if intype == outtype: return cdelt elif set((outtype,intype)) == set(('length','frequency')): # Symmetric equations! return (-constants.c / crval**2 * cdelt).to(PHYS_UNIT_DICT[outtype]) elif outtype in ('frequency','length') and 'speed' in intype: if linear: numer = cdelt * rest.to(PHYS_UNIT_DICT[outtype], u.spectral()) denom = constants.c else: numer = cdelt * constants.c * rest.to(PHYS_UNIT_DICT[outtype], u.spectral()) denom = (constants.c + crval)*(constants.c**2 - crval**2)**0.5 if outtype == 'frequency': return (-numer/denom).to(PHYS_UNIT_DICT[outtype], u.spectral()) else: return (numer/denom).to(PHYS_UNIT_DICT[outtype], u.spectral()) elif 'speed' in outtype and intype in ('frequency','length'): if linear: numer = cdelt * constants.c denom = rest.to(PHYS_UNIT_DICT[intype], u.spectral()) else: numer = 4 * constants.c * crval * rest.to(crval.unit, u.spectral())**2 * cdelt denom = (crval**2 + rest.to(crval.unit, u.spectral())**2)**2 if intype == 'frequency': return (-numer/denom).to(PHYS_UNIT_DICT[outtype], u.spectral()) else: return (numer/denom).to(PHYS_UNIT_DICT[outtype], u.spectral()) elif intype == 'air wavelength': raise TypeError("Air wavelength should be converted to vacuum earlier.") elif outtype == 'air wavelength': raise TypeError("Conversion to air wavelength not supported.") else: raise ValueError("Invalid in/out frames") def air_to_vac(wavelength): """ Implements the air to vacuum wavelength conversion described in eqn 65 of Greisen 2006 """ wlum = wavelength.to(u.um).value return (1+1e-6*(287.6155+1.62887/wlum**2+0.01360/wlum**4)) * wavelength def vac_to_air(wavelength): """ Greisen 2006 reports that the error in naively inverting Eqn 65 is less than 10^-9 and therefore acceptable.
This is therefore eqn 67 """ wlum = wavelength.to(u.um).value nl = (1+1e-6*(287.6155+1.62887/wlum**2+0.01360/wlum**4)) return wavelength/nl def air_to_vac_deriv(wavelength): """ Eqn 66 """ wlum = wavelength.to(u.um).value return (1+1e-6*(287.6155 - 1.62887/wlum**2 - 0.04080/wlum**4)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/spectral_cube.py0000644000175100001710000050553400000000000021622 0ustar00runnerdocker""" A class to represent a 3-d position-position-velocity spectral cube. """ from __future__ import print_function, absolute_import, division import warnings from functools import wraps import operator import re import itertools import copy import tempfile import textwrap from pathlib import PosixPath import six from six.moves import zip, range import dask.array as da import astropy.wcs from astropy import units as u from astropy.io.fits import PrimaryHDU, BinTableHDU, Header, Card, HDUList from astropy.utils.console import ProgressBar from astropy import log from astropy import wcs from astropy import convolution from astropy import stats from astropy.constants import si from astropy.io.registry import UnifiedReadWriteMethod import numpy as np from radio_beam import Beam, Beams from . import cube_utils from . import wcs_utils from . import spectral_axis from .masks import (LazyMask, LazyComparisonMask, BooleanArrayMask, MaskBase, is_broadcastable_and_smaller) from .ytcube import ytCube from .lower_dimensional_structures import (Projection, Slice, OneDSpectrum, LowerDimensionalObject, VaryingResolutionOneDSpectrum ) from .base_class import (BaseNDClass, SpectralAxisMixinClass, DOPPLER_CONVENTIONS, SpatialCoordMixinClass, MaskableArrayMixinClass, MultiBeamMixinClass, HeaderMixinClass, BeamMixinClass, ) from .utils import (cached, warn_slow, VarianceWarning, BeamWarning, UnsupportedIterationStrategyWarning, WCSMismatchWarning, NotImplementedWarning, SliceWarning, SmoothingWarning, StokesWarning, ExperimentalImplementationWarning, BeamAverageWarning, NonFiniteBeamsWarning, BeamWarning, WCSCelestialError, BeamUnitsError) from .spectral_axis import (determine_vconv_from_ctype, get_rest_value_from_wcs, doppler_beta, doppler_gamma, doppler_z) from .io.core import SpectralCubeRead, SpectralCubeWrite from distutils.version import LooseVersion __all__ = ['BaseSpectralCube', 'SpectralCube', 'VaryingResolutionSpectralCube'] # apply_everywhere, world: do not have a valid cube to test on __doctest_skip__ = ['BaseSpectralCube._apply_everywhere'] try: from scipy import ndimage scipyOK = True except ImportError: scipyOK = False warnings.filterwarnings('ignore', category=wcs.FITSFixedWarning, append=True) SIGMA2FWHM = 2. * np.sqrt(2. * np.log(2.)) # convenience structures to keep track of the reversed index # conventions between WCS and numpy np2wcs = {2: 0, 1: 1, 0: 2} _NP_DOC = """ Ignores excluded mask elements. Parameters ---------- axis : int (optional) The axis to collapse, or None to perform a global aggregation how : cube | slice | ray | auto How to compute the aggregation. All strategies give the same result, but certain strategies are more efficient depending on data size and layout. Cube/slice/ray iterate over decreasing subsets of the data, to conserve memory. 
Default='auto' """.replace('\n', '\n ') def aggregation_docstring(func): @wraps(func) def wrapper(*args, **kwargs): return func(*args, **kwargs) wrapper.__doc__ += _NP_DOC return wrapper _PARALLEL_DOC = """ Other Parameters ---------------- parallel : bool Use joblib to parallelize the operation. If set to ``False``, will force the use of a single core without using ``joblib``. num_cores : int or None The number of cores to use when applying this function in parallel across the cube. use_memmap : bool If specified, a memory mapped temporary file on disk will be written to rather than storing the intermediate spectra in memory. """ def parallel_docstring(func): @wraps(func) def wrapper(*args, **kwargs): return func(*args, **kwargs) line1 = wrapper.__doc__.split("\n")[1] indentation = " "*(len(line1) - len(line1.lstrip())) try: wrapper.__doc__ += textwrap.indent(_PARALLEL_DOC, indentation) except AttributeError: # python2.7 wrapper.__doc__ = textwrap.dedent(wrapper.__doc__) + _PARALLEL_DOC return wrapper def _apply_spectral_function(arguments, outcube, function, **kwargs): """ Helper function to apply a function to a spectrum. Needs to be declared toward the top of the code to allow pickling by joblib. """ (spec, includemask, ii, jj) = arguments if np.any(includemask): outcube[:,jj,ii] = function(spec, **kwargs) else: outcube[:,jj,ii] = spec def _apply_spatial_function(arguments, outcube, function, **kwargs): """ Helper function to apply a function to an image. Needs to be declared toward the top of the code to allow pickling by joblib. """ (img, includemask, ii) = arguments if np.any(includemask): outcube[ii, :, :] = function(img, **kwargs) else: outcube[ii, :, :] = img class BaseSpectralCube(BaseNDClass, MaskableArrayMixinClass, SpectralAxisMixinClass, SpatialCoordMixinClass, HeaderMixinClass): def __init__(self, data, wcs, mask=None, meta=None, fill_value=np.nan, header=None, allow_huge_operations=False, wcs_tolerance=0.0): # Deal with metadata first because it can affect data reading self._meta = meta or {} # must extract unit from data before stripping it if 'BUNIT' in self._meta: self._unit = cube_utils.convert_bunit(self._meta["BUNIT"]) elif hasattr(data, 'unit'): self._unit = data.unit else: self._unit = None # data must not be a quantity when stored in self._data if hasattr(data, 'unit'): # strip the unit so that it can be treated as cube metadata data = data.value # TODO: mask should be oriented? Or should we assume correctly oriented here? self._data, self._wcs = cube_utils._orient(data, wcs) self._wcs_tolerance = wcs_tolerance self._spectral_axis = None self._mask = mask # specifies which elements to Nan/blank/ignore # object or array-like object, given that WCS needs # to be consistent with data? #assert mask._wcs == self._wcs self._fill_value = fill_value self._header = Header() if header is None else header if not isinstance(self._header, Header): raise TypeError("If a header is given, it must be a fits.Header") # We don't pass the spectral unit via the initializer since the user # should be using ``with_spectral_unit`` if they want to set it. # However, we do want to keep track of what units the spectral axis # should be returned in, otherwise astropy's WCS can change the units, # e.g. km/s -> m/s. # This can be overridden with Header below self._spectral_unit = u.Unit(self._wcs.wcs.cunit[2]) # This operation is kind of expensive? 
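# (constructing a fresh astropy.wcs.WCS from the header, below, parses the full header just to locate the spectral axis index)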
header_specaxnum = astropy.wcs.WCS(header).wcs.spec header_specaxunit = spectral_axis.unit_from_header(self._header, spectral_axis_number=header_specaxnum+1) # Allow the original header spectral axis unit to override the default # unit if header_specaxunit is not None: self._spectral_unit = header_specaxunit self._spectral_scale = spectral_axis.wcs_unit_scale(self._spectral_unit) self.allow_huge_operations = allow_huge_operations self._cache = {} @property def _is_huge(self): return cube_utils.is_huge(self) @property def _new_thing_with(self): return self._new_cube_with def _new_cube_with(self, data=None, wcs=None, mask=None, meta=None, fill_value=None, spectral_unit=None, unit=None, wcs_tolerance=None, **kwargs): data = self._data if data is None else data if unit is None and hasattr(data, 'unit'): if data.unit != self.unit: raise u.UnitsError("New data unit '{0}' does not" " match cube unit '{1}'. You can" " override this by specifying the" " `unit` keyword." .format(data.unit, self.unit)) unit = data.unit elif unit is not None: # convert string units to Units if not isinstance(unit, u.Unit): unit = u.Unit(unit) if hasattr(data, 'unit'): if u.Unit(unit) != data.unit: raise u.UnitsError("The specified new cube unit '{0}' " "does not match the input unit '{1}'." .format(unit, data.unit)) else: data = u.Quantity(data, unit=unit, copy=False) elif self._unit is not None: unit = self.unit wcs = self._wcs if wcs is None else wcs mask = self._mask if mask is None else mask if meta is None: meta = {} meta.update(self._meta) if unit is not None: meta['BUNIT'] = unit.to_string(format='FITS') fill_value = self._fill_value if fill_value is None else fill_value spectral_unit = self._spectral_unit if spectral_unit is None else u.Unit(spectral_unit) cube = self.__class__(data=data, wcs=wcs, mask=mask, meta=meta, fill_value=fill_value, header=self._header, allow_huge_operations=self.allow_huge_operations, wcs_tolerance=wcs_tolerance or self._wcs_tolerance, **kwargs) cube._spectral_unit = spectral_unit cube._spectral_scale = spectral_axis.wcs_unit_scale(spectral_unit) return cube read = UnifiedReadWriteMethod(SpectralCubeRead) write = UnifiedReadWriteMethod(SpectralCubeWrite) @property def unit(self): """ The flux unit """ if self._unit: return self._unit else: return u.one @property def shape(self): """ Length of cube along each axis """ return self._data.shape @property def size(self): """ Number of elements in the cube """ return self._data.size @property def base(self): """ The data type 'base' of the cube - useful for, e.g., joblib """ return self._data.base def __len__(self): return self.shape[0] @property def ndim(self): """ Dimensionality of the data """ return self._data.ndim def __repr__(self): s = "{1} with shape={0}".format(self.shape, self.__class__.__name__) if self.unit is u.one: s += ":\n" else: s += " and unit={0}:\n".format(self.unit) s += (" n_x: {0:6d} type_x: {1:8s} unit_x: {2:5s}" " range: {3:12.6f}:{4:12.6f}\n".format(self.shape[2], self.wcs.wcs.ctype[0], self.wcs.wcs.cunit[0], self.longitude_extrema[0], self.longitude_extrema[1],)) s += (" n_y: {0:6d} type_y: {1:8s} unit_y: {2:5s}" " range: {3:12.6f}:{4:12.6f}\n".format(self.shape[1], self.wcs.wcs.ctype[1], self.wcs.wcs.cunit[1], self.latitude_extrema[0], self.latitude_extrema[1], )) s += (" n_s: {0:6d} type_s: {1:8s} unit_s: {2:5s}" " range: {3:12.3f}:{4:12.3f}".format(self.shape[0], self.wcs.wcs.ctype[2], self._spectral_unit, self.spectral_extrema[0], self.spectral_extrema[1], )) return s @property @cached def 
spectral_extrema(self): _spectral_min = self.spectral_axis.min() _spectral_max = self.spectral_axis.max() return u.Quantity((_spectral_min, _spectral_max)) def apply_numpy_function(self, function, fill=np.nan, reduce=True, how='auto', projection=False, unit=None, check_endian=False, progressbar=False, includemask=False, **kwargs): """ Apply a numpy function to the cube Parameters ---------- function : Numpy ufunc A numpy ufunc to apply to the cube fill : float The fill value to use on the data reduce : bool reduce indicates whether this is a reduce-like operation, that can be accumulated one slice at a time. sum/max/min are like this. argmax/argmin/stddev are not how : cube | slice | ray | auto How to compute the moment. All strategies give the same result, but certain strategies are more efficient depending on data size and layout. Cube/slice/ray iterate over decreasing subsets of the data, to conserve memory. Default='auto' projection : bool Return a :class:`~spectral_cube.lower_dimensional_structures.Projection` if the resulting array is 2D or a OneDProjection if the resulting array is 1D and the sum is over both spatial axes? unit : None or `astropy.units.Unit` The unit to include for the output array. For example, `SpectralCube.max` calls ``SpectralCube.apply_numpy_function(np.max, unit=self.unit)``, inheriting the unit from the original cube. However, for other numpy functions, e.g. `numpy.argmax`, the return is an index and therefore unitless. check_endian : bool A flag to check the endianness of the data before applying the function. This is only needed for optimized functions, e.g. those in the `bottleneck `_ package. progressbar : bool Show a progressbar while iterating over the slices through the cube? kwargs : dict Passed to the numpy function. Returns ------- result : :class:`~spectral_cube.lower_dimensional_structures.Projection` or `~astropy.units.Quantity` or float The result depends on the value of ``axis``, ``projection``, and ``unit``. If ``axis`` is None, the return will be a scalar with or without units. If axis is an integer, the return will be a :class:`~spectral_cube.lower_dimensional_structures.Projection` if ``projection`` is set """ # leave axis in kwargs to avoid overriding numpy defaults, e.g. if the # default is axis=-1, we don't want to force it to be axis=None by # specifying that in the function definition axis = kwargs.get('axis', None) if how == 'auto': strategy = cube_utils.iterator_strategy(self, axis) else: strategy = how out = None log.debug("applying numpy function {0} with strategy {1}" .format(function, strategy)) if strategy == 'slice' and reduce: out = self._reduce_slicewise(function, fill, check_endian, includemask=includemask, progressbar=progressbar, **kwargs) elif how == 'ray': out = self.apply_function(function, **kwargs) elif how not in ['auto', 'cube']: warnings.warn("Cannot use how=%s. 
Using how=cube" % how, UnsupportedIterationStrategyWarning) if out is None: out = function(self._get_filled_data(fill=fill, check_endian=check_endian), **kwargs) if axis is None: # return is scalar if unit is not None: return u.Quantity(out, unit=unit) else: return out elif projection and reduce: meta = {'collapse_axis': axis} meta.update(self._meta) if hasattr(axis, '__len__') and len(axis) == 2: # if operation is over two spatial dims if set(axis) == set((1,2)): new_wcs = self._wcs.sub([wcs.WCSSUB_SPECTRAL]) header = self._nowcs_header if cube_utils._has_beam(self): bmarg = {'beam': self.beam} elif cube_utils._has_beams(self): bmarg = {'beams': self.unmasked_beams} else: bmarg = {} return self._oned_spectrum(value=out, wcs=new_wcs, copy=False, unit=unit, header=header, meta=meta, spectral_unit=self._spectral_unit, **bmarg ) else: warnings.warn("Averaging over a spatial and a spectral " "dimension cannot produce a Projection " "quantity (no units or WCS are preserved).", SliceWarning ) return out else: new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) header = self._nowcs_header return Projection(out, copy=False, wcs=new_wcs, meta=meta, unit=unit, header=header) else: return out def _reduce_slicewise(self, function, fill, check_endian, includemask=False, progressbar=False, **kwargs): """ Compute a numpy aggregation by grabbing one slice at a time """ ax = kwargs.pop('axis', None) full_reduce = ax is None ax = ax or 0 if isinstance(ax, tuple): assert len(ax) == 2 # we only work with cubes... iterax = [x for x in range(3) if x not in ax][0] else: iterax = ax log.debug("reducing slicewise with axis = {0}".format(ax)) if includemask: planes = self._iter_mask_slices(iterax) else: planes = self._iter_slices(iterax, fill=fill, check_endian=check_endian) result = next(planes) if progressbar: progressbar = ProgressBar(self.shape[iterax]) pbu = progressbar.update else: pbu = lambda: True if isinstance(ax, tuple): # have to make a result a list of itself, since we already "got" # the first plane above result = [function(result, axis=(0,1), **kwargs)] for plane in planes: # apply to axes 0 and 1, because we're fully reducing the plane # to a number if we're applying over two axes result.append(function(plane, axis=(0,1), **kwargs)) pbu() result = np.array(result) else: for plane in planes: # axis = 2 means we're stacking two planes, the previously # computed one and the current one result = function(np.dstack((result, plane)), axis=2, **kwargs) pbu() if full_reduce: result = function(result) return result def get_mask_array(self): """ Convert the mask to a boolean numpy array """ return self._mask.include(data=self._data, wcs=self._wcs, wcs_tolerance=self._wcs_tolerance) def _naxes_dropped(self, view): """ Determine how many axes are being selected given a view. (1,2) -> 2 None -> 3 1 -> 1 2 -> 1 """ if hasattr(view,'__len__'): return len(view) elif view is None: return 3 else: return 1 @aggregation_docstring @warn_slow def sum(self, axis=None, how='auto', **kwargs): """ Return the sum of the cube, optionally over an axis. """ from .np_compat import allbadtonan projection = self._naxes_dropped(axis) in (1,2) return self.apply_numpy_function(allbadtonan(np.nansum), fill=np.nan, how=how, axis=axis, unit=self.unit, projection=projection, **kwargs) @aggregation_docstring @warn_slow def mean(self, axis=None, how='cube', **kwargs): """ Return the mean of the cube, optionally over an axis. 
""" projection = self._naxes_dropped(axis) in (1,2) if how == 'slice': # two-pass approach: first total the # of points, # then total the value of the points, then divide # (a one-pass approach is possible but requires # more sophisticated bookkeeping) counts = self._count_nonzero_slicewise(axis=axis, progressbar=kwargs.get('progressbar')) ttl = self.apply_numpy_function(np.nansum, fill=np.nan, how=how, axis=axis, unit=None, projection=False, **kwargs) out = ttl / counts if projection: if self._naxes_dropped(axis) == 1: new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) meta = {'collapse_axis': axis} meta.update(self._meta) return Projection(out, copy=False, wcs=new_wcs, meta=meta, unit=self.unit, header=self._nowcs_header) elif axis == (1,2): newwcs = self._wcs.sub([wcs.WCSSUB_SPECTRAL]) if cube_utils._has_beam(self): bmarg = {'beam': self.beam} elif cube_utils._has_beams(self): bmarg = {'beams': self.unmasked_beams} else: bmarg = {} return self._oned_spectrum(value=out, wcs=newwcs, copy=False, unit=self.unit, spectral_unit=self._spectral_unit, meta=self.meta, **bmarg ) else: # this is a weird case, but even if projection is # specified, we can't return a Quantity here because of WCS # issues. `apply_numpy_function` already does this # silently, which is unfortunate. warnings.warn("Averaging over a spatial and a spectral " "dimension cannot produce a Projection " "quantity (no units or WCS are preserved).", SliceWarning ) return out else: return out return self.apply_numpy_function(np.nanmean, fill=np.nan, how=how, axis=axis, unit=self.unit, projection=projection, **kwargs) def _count_nonzero_slicewise(self, axis=None, progressbar=False): """ Count the number of finite pixels along an axis slicewise. This is a helper function for the mean and std deviation slicewise iterators. """ counts = self.apply_numpy_function(np.sum, fill=np.nan, how='slice', axis=axis, unit=None, projection=False, progressbar=progressbar, includemask=True) return counts @aggregation_docstring @warn_slow def std(self, axis=None, how='cube', ddof=0, **kwargs): """ Return the standard deviation of the cube, optionally over an axis. Other Parameters ---------------- ddof : int Means Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements. By default ``ddof`` is zero. """ projection = self._naxes_dropped(axis) in (1,2) if how == 'slice': if axis is None: raise NotImplementedError("The overall standard deviation " "cannot be computed in a slicewise " "manner. 
Please use a " "different strategy.") if hasattr(axis, '__len__') and len(axis) == 2: return self.apply_numpy_function(np.nanstd, axis=axis, how='slice', projection=projection, unit=self.unit, **kwargs) else: counts = self._count_nonzero_slicewise(axis=axis) ttl = self.apply_numpy_function(np.nansum, fill=np.nan, how='slice', axis=axis, unit=None, projection=False, **kwargs) # Equivalent, but with more overhead: # ttl = self.sum(axis=axis, how='slice').value mean = ttl/counts planes = self._iter_slices(axis, fill=np.nan, check_endian=False) result = (next(planes)-mean)**2 for plane in planes: result = np.nansum(np.dstack((result, (plane-mean)**2)), axis=2) out = (result/(counts-ddof))**0.5 if projection: new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) meta = {'collapse_axis': axis} meta.update(self._meta) return Projection(out, copy=False, wcs=new_wcs, meta=meta, unit=self.unit, header=self._nowcs_header) else: return out # standard deviation cannot be computed as a trivial step-by-step # process. There IS a one-pass algorithm for std dev, but it is not # implemented, so we must force cube here. We could and should also # implement raywise reduction return self.apply_numpy_function(np.nanstd, fill=np.nan, how=how, axis=axis, unit=self.unit, projection=projection, **kwargs) @aggregation_docstring @warn_slow def mad_std(self, axis=None, how='cube', **kwargs): """ Use astropy's mad_std to computer the standard deviation """ if int(astropy.__version__[0]) < 2: raise NotImplementedError("mad_std requires astropy >= 2") projection = self._naxes_dropped(axis) in (1,2) if how == 'ray' and not hasattr(axis, '__len__'): # no need for fill here; masked-out data are simply not included return self.apply_numpy_function(stats.mad_std, axis=axis, how='ray', unit=self.unit, projection=projection, ignore_nan=True, ) elif how == 'slice' and hasattr(axis, '__len__') and len(axis) == 2: return self.apply_numpy_function(stats.mad_std, axis=axis, how='slice', projection=projection, unit=self.unit, fill=np.nan, ignore_nan=True, **kwargs) elif how in ('ray', 'slice'): raise NotImplementedError('Cannot run mad_std slicewise or raywise ' 'unless the dimensionality is also reduced in the same direction.') else: return self.apply_numpy_function(stats.mad_std, fill=np.nan, axis=axis, unit=self.unit, ignore_nan=True, how=how, projection=projection, **kwargs) @aggregation_docstring @warn_slow def max(self, axis=None, how='auto', **kwargs): """ Return the maximum data value of the cube, optionally over an axis. """ projection = self._naxes_dropped(axis) in (1,2) return self.apply_numpy_function(np.nanmax, fill=np.nan, how=how, axis=axis, unit=self.unit, projection=projection, **kwargs) @aggregation_docstring @warn_slow def min(self, axis=None, how='auto', **kwargs): """ Return the minimum data value of the cube, optionally over an axis. """ projection = self._naxes_dropped(axis) in (1,2) return self.apply_numpy_function(np.nanmin, fill=np.nan, how=how, axis=axis, unit=self.unit, projection=projection, **kwargs) @aggregation_docstring @warn_slow def argmax(self, axis=None, how='auto', **kwargs): """ Return the index of the maximum data value. The return value is arbitrary if all pixels along ``axis`` are excluded from the mask. """ return self.apply_numpy_function(np.nanargmax, fill=-np.inf, reduce=False, projection=False, how=how, axis=axis, **kwargs) @aggregation_docstring @warn_slow def argmin(self, axis=None, how='auto', **kwargs): """ Return the index of the minimum data value. 
The return value is arbitrary if all pixels along ``axis`` are excluded from the mask """ return self.apply_numpy_function(np.nanargmin, fill=np.inf, reduce=False, projection=False, how=how, axis=axis, **kwargs) def _argmaxmin_world(self, axis, method, **kwargs): ''' Return the spatial or spectral index of the maximum or minimum value. Use `argmax_world` and `argmin_world` directly. ''' operation_name = '{}_world'.format(method) if wcs_utils.is_pixel_axis_to_wcs_correlated(self.wcs, axis): raise WCSCelestialError("{} requires the celestial axes" " to be aligned along image axes." .format(operation_name)) if method == 'argmin': arg_pixel_plane = self.argmin(axis=axis, **kwargs) elif method == 'argmax': arg_pixel_plane = self.argmax(axis=axis, **kwargs) else: raise ValueError("`method` must be 'argmin' or 'argmax'") # Convert to WCS coordinates. out = cube_utils.world_take_along_axis(self, arg_pixel_plane, axis) # Compute whether the mask has any valid data along `axis` collapsed_mask = self.mask.include().any(axis=axis) out[~collapsed_mask] = np.NaN # Return a Projection. new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) meta = {'collapse_axis': axis} meta.update(self._meta) return Projection(out, copy=False, wcs=new_wcs, meta=meta, unit=out.unit, header=self._nowcs_header) @warn_slow def argmax_world(self, axis, **kwargs): ''' Return the spatial or spectral index of the maximum value along a line of sight. Parameters ---------- axis : int The axis to return the peak location along. e.g., `axis=0` will return the value of the spectral axis at the peak value. kwargs : dict Passed to `~SpectralCube.argmax`. ''' return self._argmaxmin_world(axis, 'argmax', **kwargs) @warn_slow def argmin_world(self, axis, **kwargs): ''' Return the spatial or spectral index of the minimum value along a line of sight. Parameters ---------- axis : int The axis to return the peak location along. e.g., `axis=0` will return the value of the spectral axis at the peak value. kwargs : dict Passed to `~SpectralCube.argmin`. ''' return self._argmaxmin_world(axis, 'argmin', **kwargs) def chunked(self, chunksize=1000): """ Not Implemented. Iterate over chunks of valid data """ raise NotImplementedError() def _get_flat_shape(self, axis): """ Get the shape of the array after flattening along an axis """ iteraxes = [0, 1, 2] iteraxes.remove(axis) # x,y are defined as first,second dim to iterate over # (not x,y in pixel space...) nx = self.shape[iteraxes[0]] ny = self.shape[iteraxes[1]] return nx, ny @warn_slow def _apply_everywhere(self, function, *args): """ Return a new cube with ``function`` applied to all pixels Private because this doesn't have an obvious and easy-to-use API Examples -------- >>> newcube = cube.apply_everywhere(np.add, 0.5*u.Jy) """ try: test_result = function(np.ones([1,1,1])*self.unit, *args) # First, check that function returns same # of dims? assert test_result.ndim == 3,"Output is not 3-dimensional" except Exception as ex: raise AssertionError("Function could not be applied to a simple " "cube. The error was: {0}".format(ex)) data = function(u.Quantity(self._get_filled_data(fill=self._fill_value), self.unit, copy=False), *args) return self._new_cube_with(data=data, unit=data.unit) @warn_slow def _cube_on_cube_operation(self, function, cube, equivalencies=[], **kwargs): """ Apply an operation between two cubes. Inherits the metadata of the left cube. 
Parameters ---------- function : function A function to apply to the cubes cube : SpectralCube Another cube to put into the function equivalencies : list A list of astropy equivalencies kwargs : dict Passed to np.testing.assert_almost_equal """ assert cube.shape == self.shape if not self.unit.is_equivalent(cube.unit, equivalencies=equivalencies): raise u.UnitsError("{0} is not equivalent to {1}" .format(self.unit, cube.unit)) if not wcs_utils.check_equality(self.wcs, cube.wcs, warn_missing=True, **kwargs): warnings.warn("Cube WCSs do not match, but their shapes do", WCSMismatchWarning) try: test_result = function(np.ones([1,1,1])*self.unit, np.ones([1,1,1])*self.unit) # First, check that function returns same # of dims? assert test_result.shape == (1,1,1) except Exception as ex: raise AssertionError("Function {1} could not be applied to a " "pair of simple " "cubes. The error was: {0}".format(ex, function)) cube = cube.to(self.unit) data = function(self._data, cube._data) try: # multiplication, division, etc. are valid inter-unit operations unit = function(self.unit, cube.unit) except TypeError: # addition, subtraction are not unit = self.unit return self._new_cube_with(data=data, unit=unit) def apply_function(self, function, axis=None, weights=None, unit=None, projection=False, progressbar=False, update_function=None, keep_shape=False, **kwargs): """ Apply a function to valid data along the specified axis or to the whole cube, optionally using a weight array that is the same shape (or at least can be sliced in the same way) Parameters ---------- function : function A function that can be applied to a numpy array. Does not need to be nan-aware axis : 0, 1, 2, or None The axis to operate along. If None, the return is scalar. weights : (optional) np.ndarray An array with the same shape (or slicing abilities/results) as the data cube unit : (optional) `~astropy.units.Unit` The unit of the output projection or value. Not all functions should return quantities with units. projection : bool Return a projection if the resulting array is 2D? progressbar : bool Show a progressbar while iterating over the slices/rays through the cube? keep_shape : bool If `True`, the returned object will be the same dimensionality as the cube. update_function : function An alternative tracker for the progress of applying the function to the cube data. If ``progressbar`` is ``True``, this argument is ignored. Returns ------- result : :class:`~spectral_cube.lower_dimensional_structures.Projection` or `~astropy.units.Quantity` or float The result depends on the value of ``axis``, ``projection``, and ``unit``. If ``axis`` is None, the return will be a scalar with or without units. If axis is an integer, the return will be a :class:`~spectral_cube.lower_dimensional_structures.Projection` if ``projection`` is set """ if axis is None: out = function(self.flattened(), **kwargs) if unit is not None: return u.Quantity(out, unit=unit) else: return out if hasattr(axis, '__len__'): raise NotImplementedError("`apply_function` does not support " "function application across multiple " "axes.
Try `apply_numpy_function`.") # determine the output array shape nx, ny = self._get_flat_shape(axis) nz = self.shape[axis] if keep_shape else 1 # allocate memory for output array out = np.empty([nz, nx, ny]) * np.nan if progressbar: progressbar = ProgressBar(nx*ny) pbu = progressbar.update elif update_function is not None: pbu = update_function else: pbu = lambda: True # iterate over "lines of sight" through the cube for y, x, slc in self._iter_rays(axis): # acquire the flattened, valid data for the slice data = self.flattened(slc, weights=weights) if len(data) != 0: result = function(data, **kwargs) if hasattr(result, 'value'): # store result in array out[:, y, x] = result.value else: out[:, y, x] = result pbu() if not keep_shape: out = out[0, :, :] if projection and axis in (0, 1, 2): new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) meta = {'collapse_axis': axis} meta.update(self._meta) return Projection(out, copy=False, wcs=new_wcs, meta=meta, unit=unit, header=self._nowcs_header) else: return out def _iter_rays(self, axis=None): """ Iterate over views corresponding to lines-of-sight through a cube along the specified axis """ ny, nx = self._get_flat_shape(axis) for y in range(ny): for x in range(nx): # create length-1 view for each position slc = [slice(y, y + 1), slice(x, x + 1), ] # create a length-N slice (all-inclusive) along the selected axis slc.insert(axis, slice(None)) yield y, x, tuple(slc) def _iter_slices(self, axis, fill=np.nan, check_endian=False): """ Iterate over the cube one slice at a time, replacing masked elements with fill """ view = [slice(None)] * 3 for x in range(self.shape[axis]): view[axis] = x yield self._get_filled_data(view=tuple(view), fill=fill, check_endian=check_endian) def _iter_mask_slices(self, axis): """ Iterate over the cube's mask one slice at a time, yielding the boolean inclusion mask for each slice """ view = [slice(None)] * 3 for x in range(self.shape[axis]): view[axis] = x yield self._mask.include(data=self._data, view=tuple(view), wcs=self._wcs, wcs_tolerance=self._wcs_tolerance, ) def flattened(self, slice=(), weights=None): """ Return a slice of the cube giving only the valid data (i.e., removing bad values) Parameters ---------- slice: 3-tuple A length-3 tuple of slices (or any equivalent valid slice of a cube) weights: (optional) np.ndarray An array with the same shape (or slicing abilities/results) as the data cube """ data = self._mask._flattened(data=self._data, wcs=self._wcs, view=slice) if isinstance(data, da.Array): # Quantity does not work well with lazily evaluated data with an # unknown shape (which is the case when doing boolean indexing of arrays) data = self._compute(data) if weights is not None: weights = self._mask._flattened(data=weights, wcs=self._wcs, view=slice) return u.Quantity(data * weights, self.unit, copy=False) else: return u.Quantity(data, self.unit, copy=False) def median(self, axis=None, iterate_rays=False, **kwargs): """ Compute the median of an array, optionally along an axis. Ignores excluded mask elements. Parameters ---------- axis : int (optional) The axis to collapse iterate_rays : bool Iterate over individual rays?
This mode is slower but can save RAM costs, which may be extreme for large cubes Returns ------- med : ndarray The median """ try: from bottleneck import nanmedian bnok = True except ImportError: bnok = False # slicewise median is nonsense, must force how = 'cube' # bottleneck.nanmedian does not allow axis to be a list or tuple if bnok and not iterate_rays and not isinstance(axis, (list, tuple)): log.debug("Using bottleneck nanmedian") result = self.apply_numpy_function(nanmedian, axis=axis, projection=True, unit=self.unit, how='cube', check_endian=True, **kwargs) elif hasattr(np, 'nanmedian') and not iterate_rays: log.debug("Using numpy nanmedian") result = self.apply_numpy_function(np.nanmedian, axis=axis, projection=True, unit=self.unit, how='cube',**kwargs) else: log.debug("Using numpy median iterating over rays") result = self.apply_function(np.median, projection=True, axis=axis, unit=self.unit, **kwargs) return result def percentile(self, q, axis=None, iterate_rays=False, **kwargs): """ Return percentiles of the data. Parameters ---------- q : float The percentile to compute axis : int, or None Which axis to compute percentiles over iterate_rays : bool Iterate over individual rays? This mode is slower but can save RAM costs, which may be extreme for large cubes """ if hasattr(np, 'nanpercentile') and not iterate_rays: result = self.apply_numpy_function(np.nanpercentile, q=q, axis=axis, projection=True, unit=self.unit, how='cube', **kwargs) else: result = self.apply_function(np.percentile, q=q, axis=axis, projection=True, unit=self.unit, **kwargs) return result def with_mask(self, mask, inherit_mask=True, wcs_tolerance=None): """ Return a new SpectralCube instance that contains a composite mask of the current SpectralCube and the new ``mask``. Values of the mask that are ``True`` will be *included* (masks are analogous to numpy boolean index arrays, they are the inverse of the ``.mask`` attribute of a numpy masked array). Parameters ---------- mask : :class:`~spectral_cube.masks.MaskBase` instance, or boolean numpy array The mask to apply. If a boolean array is supplied, it will be converted into a mask, assuming that `True` values indicate included elements. inherit_mask : bool (optional, default=True) If True, combines the provided mask with the mask currently attached to the cube wcs_tolerance : None or float The tolerance of difference in WCS parameters between the cube and the mask. Defaults to `self._wcs_tolerance` (which itself defaults to 0.0) if unspecified Returns ------- new_cube : :class:`SpectralCube` A cube with the new mask applied. Notes ----- This operation returns a view into the data, and not a copy. 
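Examples -------- A minimal sketch (``cube`` stands for any existing cube with brightness units of K; the threshold is illustrative); comparison operators on the cube return lazy masks suitable for use here: >>> import astropy.units as u >>> masked_cube = cube.with_mask(cube > 0.5 * u.K)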
""" if isinstance(mask, np.ndarray): if not is_broadcastable_and_smaller(mask.shape, self._data.shape): raise ValueError("Mask shape is not broadcastable to data shape: " "%s vs %s" % (mask.shape, self._data.shape)) mask = BooleanArrayMask(mask, self._wcs, shape=self._data.shape) if self._mask is not None and inherit_mask: new_mask = np.bitwise_and(self._mask, mask) else: new_mask = mask new_mask._validate_wcs(new_data=self._data, new_wcs=self._wcs, wcs_tolerance=wcs_tolerance or self._wcs_tolerance) return self._new_cube_with(mask=new_mask, wcs_tolerance=wcs_tolerance) def __getitem__(self, view): # Need to allow self[:], self[:,:] if isinstance(view, (slice,int,np.int64)): view = (view, slice(None), slice(None)) elif len(view) == 2: view = view + (slice(None),) elif len(view) > 3: raise IndexError("Too many indices") meta = {} meta.update(self._meta) slice_data = [(s.start, s.stop, s.step) if hasattr(s,'start') else s for s in view] if 'slice' in meta: meta['slice'].append(slice_data) else: meta['slice'] = [slice_data] intslices = [2-ii for ii,s in enumerate(view) if not hasattr(s,'start')] if intslices: if len(intslices) > 1: if 2 in intslices: raise NotImplementedError("1D slices along non-spectral " "axes are not yet implemented.") newwcs = self._wcs.sub([a for a in (1,2,3) if a not in [x+1 for x in intslices]]) if cube_utils._has_beam(self): bmarg = {'beam': self.beam} elif cube_utils._has_beams(self): bmarg = {'beams': self.beams} else: bmarg = {} return self._oned_spectrum(value=self._data[view], wcs=newwcs, copy=False, unit=self.unit, spectral_unit=self._spectral_unit, mask=self.mask[view] if self.mask is not None else None, meta=meta, **bmarg ) # only one element, so drop an axis newwcs = wcs_utils.drop_axis(self._wcs, intslices[0]) header = self._nowcs_header if intslices[0] == 0: # celestial: can report the wavelength/frequency of the axis header['CRVAL3'] = self.spectral_axis[intslices[0]].value header['CDELT3'] = self.wcs.sub([wcs.WCSSUB_SPECTRAL]).wcs.cdelt[0] header['CUNIT3'] = self._spectral_unit.to_string(format='FITS') return Slice(value=self.filled_data[view], mask=self.mask[view] if self.mask is not None else None, wcs=newwcs, copy=False, unit=self.unit, header=header, meta=meta) newmask = self._mask[view] if self._mask is not None else None newwcs = wcs_utils.slice_wcs(self._wcs, view, shape=self.shape) return self._new_cube_with(data=self._data[view], wcs=newwcs, mask=newmask, meta=meta) @property def unitless(self): """Return a copy of self with unit set to None""" newcube = self._new_cube_with() newcube._unit = None return newcube def with_spectral_unit(self, unit, velocity_convention=None, rest_value=None): """ Returns a new Cube with a different Spectral Axis unit Parameters ---------- unit : :class:`~astropy.units.Unit` Any valid spectral unit: velocity, (wave)length, or frequency. Only vacuum units are supported. velocity_convention : 'relativistic', 'radio', or 'optical' The velocity convention to use for the output velocity axis. Required if the output type is velocity. This can be either one of the above strings, or an `astropy.units` equivalency. rest_value : :class:`~astropy.units.Quantity` A rest wavelength or frequency with appropriate units. Required if output type is velocity. The cube's WCS should include this already if the *input* type is velocity, but the WCS's rest wavelength/frequency can be overridden with this parameter. .. 
note:: This must be the rest frequency/wavelength *in vacuum*, even if your cube has air wavelength units """ newwcs,newmeta = self._new_spectral_wcs(unit=unit, velocity_convention=velocity_convention, rest_value=rest_value) if self._mask is not None: newmask = self._mask.with_spectral_unit(unit, velocity_convention=velocity_convention, rest_value=rest_value) newmask._wcs = newwcs else: newmask = None cube = self._new_cube_with(wcs=newwcs, mask=newmask, meta=newmeta, spectral_unit=unit) return cube @cube_utils.slice_syntax def unmasked_data(self, view): """ Return a view of the subset of the underlying data, ignoring the mask. Returns ------- data : Quantity instance The unmasked data """ values = self._data[view] # Astropy Quantities don't play well with dask arrays with shape () if isinstance(values, da.Array) and values.shape == (): values = values.compute() return u.Quantity(values, self.unit, copy=False) def unmasked_copy(self): """ Return a copy of the cube with no mask (i.e., all data included) """ newcube = self._new_cube_with() newcube._mask = None return newcube @cached def _pix_cen(self): """ Offset of every pixel from the origin, along each direction Returns ------- tuple of spectral_offset, y_offset, x_offset, each 3D arrays describing the distance from the origin Notes ----- These arrays are broadcast, and are not memory intensive Each array is in the units of the corresponding wcs.cunit, but this is implicit (e.g., they are not astropy Quantity arrays) """ # Start off by extracting the world coordinates of the pixels _, lat, lon = self.world[0, :, :] spectral, _, _ = self.world[:, 0, 0] spectral -= spectral[0] # offset from first pixel # Convert to radians lon = np.radians(lon) lat = np.radians(lat) # Find the dx and dy arrays from astropy.coordinates.angle_utilities import angular_separation dx = angular_separation(lon[:, :-1], lat[:, :-1], lon[:, 1:], lat[:, :-1]) dy = angular_separation(lon[:-1, :], lat[:-1, :], lon[1:, :], lat[1:, :]) # Find the cumulative offset - need to add a zero at the start x = np.zeros(self._data.shape[1:]) y = np.zeros(self._data.shape[1:]) x[:, 1:] = np.cumsum(np.degrees(dx), axis=1) y[1:, :] = np.cumsum(np.degrees(dy), axis=0) if isinstance(self._data, da.Array): x, y, spectral = da.broadcast_arrays(x[None,:,:], y[None,:,:], spectral[:,None,None]) # NOTE: we need to rechunk these to the actual data size, otherwise # the resulting arrays have a single chunk which can cause issues with # da.store (which writes data out in chunks) return (spectral.rechunk(self._data.chunksize), y.rechunk(self._data.chunksize), x.rechunk(self._data.chunksize)) else: x, y, spectral = np.broadcast_arrays(x[None,:,:], y[None,:,:], spectral[:,None,None]) return spectral, y, x @cached def _pix_size_slice(self, axis): """ Return the size of each pixel along any given direction. Assumes pixels have equal size. Also assumes that the spectral and spatial directions are separable, which is enforced throughout this code. Parameters ---------- axis : 0, 1, or 2 The axis along which to compute the pixel size Returns ------- Pixel size in units of either degrees or the appropriate spectral unit """ if axis == 0: # note that self._spectral_scale is required here because wcs # forces into units of m, m/s, or Hz return np.abs(self.wcs.pixel_scale_matrix[2,2]) * self._spectral_scale elif axis in (1,2): # the pixel size is a projection.
I think the pixel_scale_matrix # must be symmetric, such that psm[axis,:]**2 == psm[:,axis]**2 return np.sum(self.wcs.pixel_scale_matrix[2-axis,:]**2)**0.5 else: raise ValueError("Cubes have 3 axes.") @cached def _pix_size(self): """ Return the size of each pixel along each direction, in world units Returns ------- dv, dy, dx : tuple of 3D arrays The extent of each pixel along each direction Notes ----- These arrays are broadcast, and are not memory intensive Each array is in the units of the corresponding wcs.cunit, but this is implicit (e.g., they are not astropy Quantity arrays) """ # First, scale along x direction xpix = np.linspace(-0.5, self._data.shape[2] - 0.5, self._data.shape[2] + 1) ypix = np.linspace(0., self._data.shape[1] - 1, self._data.shape[1]) xpix, ypix = np.meshgrid(xpix, ypix) zpix = np.zeros(xpix.shape) lon, lat, _ = self._wcs.all_pix2world(xpix, ypix, zpix, 0) # Convert to radians lon = np.radians(lon) lat = np.radians(lat) # Find the dx and dy arrays from astropy.coordinates.angle_utilities import angular_separation dx = angular_separation(lon[:, :-1], lat[:, :-1], lon[:, 1:], lat[:, :-1]) # Next, scale along y direction xpix = np.linspace(0., self._data.shape[2] - 1, self._data.shape[2]) ypix = np.linspace(-0.5, self._data.shape[1] - 0.5, self._data.shape[1] + 1) xpix, ypix = np.meshgrid(xpix, ypix) zpix = np.zeros(xpix.shape) lon, lat, _ = self._wcs.all_pix2world(xpix, ypix, zpix, 0) # Convert to radians lon = np.radians(lon) lat = np.radians(lat) # Find the dx and dy arrays from astropy.coordinates.angle_utilities import angular_separation dy = angular_separation(lon[:-1, :], lat[:-1, :], lon[1:, :], lat[1:, :]) # Next, spectral coordinates zpix = np.linspace(-0.5, self._data.shape[0] - 0.5, self._data.shape[0] + 1) xpix = np.zeros(zpix.shape) ypix = np.zeros(zpix.shape) _, _, spectral = self._wcs.all_pix2world(xpix, ypix, zpix, 0) # Take spectral units into account # order of operations here is crucial! If this is done after # broadcasting, the full array size is allocated, which is bad! dspectral = np.diff(spectral) * self._spectral_scale dx = np.abs(np.degrees(dx.reshape(1, dx.shape[0], dx.shape[1]))) dy = np.abs(np.degrees(dy.reshape(1, dy.shape[0], dy.shape[1]))) dspectral = np.abs(dspectral.reshape(-1, 1, 1)) dx, dy, dspectral = np.broadcast_arrays(dx, dy, dspectral) return dspectral, dy, dx def moment(self, order=0, axis=0, how='auto'): """ Compute moments along the spectral axis. Moments are defined as follows, where :math:`I` is the intensity in a channel and :math:`x` is the spectral coordinate: Moment 0: .. math:: M_0 = \\int I dx Moment 1: .. math:: M_1 = \\frac{\\int I x dx}{M_0} Moment N: .. math:: M_N = \\frac{\\int I (x - M_1)^N dx}{M_0} .. warning:: Note that these follow the mathematical definitions of moments, and therefore the second moment will return a variance map. To get linewidth maps, you can instead use the :meth:`~SpectralCube.linewidth_fwhm` or :meth:`~SpectralCube.linewidth_sigma` methods. Parameters ---------- order : int The order of the moment to take. Default=0 axis : int The axis along which to compute the moment. Default=0 how : cube | slice | ray | auto How to compute the moment. All strategies give the same result, but certain strategies are more efficient depending on data size and layout. Cube/slice/ray iterate over decreasing subsets of the data, to conserve memory.
Default='auto' Returns ------- map : :class:`~spectral_cube.lower_dimensional_structures.Projection` The moment map, with the collapsed WCS attached Notes ----- Generally, how='cube' is fastest for small cubes that easily fit into memory. how='slice' is best for most larger datasets. how='ray' is probably only a good idea for very large cubes whose data are contiguous over the axis of the moment map. For the first moment, the result for axis=1, 2 is the angular offset *relative to the cube face*. For axis=0, it is the *absolute* velocity/frequency of the first moment. """ if axis == 0 and order == 2: warnings.warn("Note that the second moment returned will be a " "variance map. To get a linewidth map, use the " "SpectralCube.linewidth_fwhm() or " "SpectralCube.linewidth_sigma() methods instead.", VarianceWarning) from ._moments import (moment_slicewise, moment_cubewise, moment_raywise, moment_auto) dispatch = dict(slice=moment_slicewise, cube=moment_cubewise, ray=moment_raywise, auto=moment_auto) if how not in dispatch: raise ValueError("Invalid how. Must be in %s" % sorted(list(dispatch.keys()))) out = dispatch[how](self, order, axis) # apply units if order == 0: if axis == 0 and self._spectral_unit is not None: axunit = unit = self._spectral_unit else: axunit = unit = u.Unit(self._wcs.wcs.cunit[np2wcs[axis]]) out = u.Quantity(out, self.unit * axunit, copy=False) else: if axis == 0 and self._spectral_unit is not None: unit = self._spectral_unit ** max(order, 1) else: unit = u.Unit(self._wcs.wcs.cunit[np2wcs[axis]]) ** max(order, 1) out = u.Quantity(out, unit, copy=False) # special case: for order=1, axis=0, you usually want # the absolute velocity and not the offset if order == 1 and axis == 0: out += self.world[0, :, :][0] new_wcs = wcs_utils.drop_axis(self._wcs, np2wcs[axis]) meta = {'moment_order': order, 'moment_axis': axis, 'moment_method': how} meta.update(self._meta) return Projection(out, copy=False, wcs=new_wcs, meta=meta, header=self._nowcs_header) def moment0(self, axis=0, how='auto'): """ Compute the zeroth moment along an axis. See :meth:`moment`. """ return self.moment(axis=axis, order=0, how=how) def moment1(self, axis=0, how='auto'): """ Compute the 1st moment along an axis. For an explanation of the ``axis`` and ``how`` parameters, see :meth:`moment`. """ return self.moment(axis=axis, order=1, how=how) def moment2(self, axis=0, how='auto'): """ Compute the 2nd moment along an axis. For an explanation of the ``axis`` and ``how`` parameters, see :meth:`moment`. """ return self.moment(axis=axis, order=2, how=how) def linewidth_sigma(self, how='auto'): """ Compute a (sigma) linewidth map along the spectral axis. For an explanation of the ``how`` parameter, see :meth:`moment`. """ with np.errstate(invalid='ignore'): with warnings.catch_warnings(): warnings.simplefilter("ignore", VarianceWarning) return np.sqrt(self.moment2(how=how)) def linewidth_fwhm(self, how='auto'): """ Compute a (FWHM) linewidth map along the spectral axis. For an explanation of the ``how`` parameter, see :meth:`moment`. """ return self.linewidth_sigma(how=how) * SIGMA2FWHM @property def spectral_axis(self): """ A `~astropy.units.Quantity` array containing the central values of each channel along the spectral axis.
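Examples -------- For instance, for a cube read with `SpectralCube.read` (the variable name ``cube`` is illustrative): >>> cube.spectral_axis[:3]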
""" return self.world[:, 0, 0][0].ravel() @property def velocity_convention(self): """ The `~astropy.units.equivalencies` that describes the spectral axis """ return spectral_axis.determine_vconv_from_ctype(self.wcs.wcs.ctype[self.wcs.wcs.spec]) def closest_spectral_channel(self, value): """ Find the index of the closest spectral channel to the specified spectral coordinate. Parameters ---------- value : :class:`~astropy.units.Quantity` The value of the spectral coordinate to search for. """ # TODO: we have to not compute this every time spectral_axis = self.spectral_axis try: value = value.to(spectral_axis.unit, equivalencies=u.spectral()) except u.UnitsError: if value.unit.is_equivalent(u.Hz, equivalencies=u.spectral()): if spectral_axis.unit.is_equivalent(u.m / u.s): raise u.UnitsError("Spectral axis is in velocity units and " "'value' is in frequency-equivalent units " "- use SpectralCube.with_spectral_unit " "first to convert the cube to frequency-" "equivalent units, or search for a " "velocity instead") else: raise u.UnitsError("Unexpected spectral axis units: {0}".format(spectral_axis.unit)) elif value.unit.is_equivalent(u.m / u.s): if spectral_axis.unit.is_equivalent(u.Hz, equivalencies=u.spectral()): raise u.UnitsError("Spectral axis is in frequency-equivalent " "units and 'value' is in velocity units " "- use SpectralCube.with_spectral_unit " "first to convert the cube to frequency-" "equivalent units, or search for a " "velocity instead") else: raise u.UnitsError("Unexpected spectral axis units: {0}".format(spectral_axis.unit)) else: raise u.UnitsError("'value' should be in frequency equivalent or velocity units (got {0})".format(value.unit)) # TODO: optimize the next line - just brute force for now return np.argmin(np.abs(spectral_axis - value)) def spectral_slab(self, lo, hi): """ Extract a new cube between two spectral coordinates Parameters ---------- lo, hi : :class:`~astropy.units.Quantity` The lower and upper spectral coordinate for the slab range. The units should be compatible with the units of the spectral axis. If the spectral axis is in frequency-equivalent units and you want to select a range in velocity, or vice-versa, you should first use :meth:`~spectral_cube.SpectralCube.with_spectral_unit` to convert the units of the spectral axis. """ # Find range of values for spectral axis ilo = self.closest_spectral_channel(lo) ihi = self.closest_spectral_channel(hi) if ilo > ihi: ilo, ihi = ihi, ilo ihi += 1 # Create WCS slab wcs_slab = self._wcs.deepcopy() wcs_slab.wcs.crpix[2] -= ilo # Create mask slab if self._mask is None: mask_slab = None else: try: mask_slab = self._mask[ilo:ihi, :, :] except NotImplementedError: warnings.warn("Mask slicing not implemented for " "{0} - dropping mask". 
format(self._mask.__class__.__name__), NotImplementedWarning ) mask_slab = None # Create new spectral cube slab = self._new_cube_with(data=self._data[ilo:ihi], wcs=wcs_slab, mask=mask_slab) # TODO: we could change the WCS to give a spectral axis in the # correct units as requested - so if the initial cube is in Hz and we # request a range in km/s, we could adjust the WCS to be in km/s # instead return slab def minimal_subcube(self, spatial_only=False): """ Return the minimum enclosing subcube where the mask is valid Parameters ---------- spatial_only: bool Only compute the minimal subcube in the spatial dimensions """ if self._mask is not None: return self[self.subcube_slices_from_mask(self._mask, spatial_only=spatial_only)] else: return self[:] def subcube_from_mask(self, region_mask): """ Given a mask, return the minimal subcube that encloses the mask Parameters ---------- region_mask: `~spectral_cube.masks.MaskBase` or boolean `numpy.ndarray` The mask with appropriate WCS or an ndarray with matched coordinates """ return self[self.subcube_slices_from_mask(region_mask)] def subcube_slices_from_mask(self, region_mask, spatial_only=False): """ Given a mask, return the slices corresponding to the minimum subcube that encloses the mask Parameters ---------- region_mask: `~spectral_cube.masks.MaskBase` or boolean `numpy.ndarray` The mask with appropriate WCS or an ndarray with matched coordinates spatial_only: bool Return only slices that affect the spatial dimensions; the spectral dimension will be left unchanged """ if not scipyOK: raise ImportError("Scipy could not be imported: this function won't work.") if isinstance(region_mask, np.ndarray): if is_broadcastable_and_smaller(region_mask.shape, self.shape): region_mask = BooleanArrayMask(region_mask, self._wcs) else: raise ValueError("Mask shape does not match cube shape.") include = region_mask.include(self._data, self._wcs, wcs_tolerance=self._wcs_tolerance) if not include.any(): return (slice(0),)*3 slices = ndimage.find_objects(np.broadcast_arrays(include, self._data)[0])[0] if spatial_only: slices = (slice(None), slices[1], slices[2]) return tuple(slices) def subcube(self, xlo='min', xhi='max', ylo='min', yhi='max', zlo='min', zhi='max', rest_value=None): """ Extract a sub-cube spatially and spectrally. When spatial WCS dimensions are given as an `~astropy.units.Quantity`, the spatial coordinates of the 'lo' and 'hi' corners are solved together. This minimizes WCS variations due to the sky curvature when slicing from a large (>1 deg) image. Parameters ---------- [xyz]lo/[xyz]hi : int or :class:`~astropy.units.Quantity` or ``min``/``max`` The endpoints to extract. If given as a quantity, will be interpreted as world coordinates. If given as a string or int, will be interpreted as pixel coordinates. """ dims = {'x': 2, 'y': 1, 'z': 0} limit_dict = {} limit_dict['zlo'] = 0 if zlo == 'min' else zlo limit_dict['zhi'] = self.shape[0] if zhi == 'max' else zhi # Specific warning for slicing a frequency axis with a velocity or # vice versa if ((hasattr(zlo, 'unit') and not zlo.unit.is_equivalent(self.spectral_axis.unit)) or (hasattr(zhi, 'unit') and not zhi.unit.is_equivalent(self.spectral_axis.unit))): raise u.UnitsError("Spectral units are not equivalent to the " "spectral slice. Use `.with_spectral_unit` " "to convert to equivalent units first") # Solve for the spatial pixel indices together limit_dict_spat = wcs_utils.find_spatial_pixel_index(self, xlo, xhi, ylo, yhi) limit_dict.update(limit_dict_spat) # Handle the z (spectral) axis.
This shouldn't change # much spatially, so solve one at a time # Track if the z axis values had units. Will need to make a +1 correction below united = [] for lim in limit_dict: if 'z' not in lim: continue limval = limit_dict[lim] if hasattr(limval, 'unit'): united.append(lim) dim = dims[lim[0]] sl = [slice(0,1)]*2 sl.insert(dim, slice(None)) sl = tuple(sl) spine = self.world[sl][dim] val = np.argmin(np.abs(limval-spine)) if limval > spine.max() or limval < spine.min(): log.warning("The limit {0} is out of bounds." " Using min/max instead.".format(lim)) limit_dict[lim] = val # Check spectral axis ordering. hi,lo = limit_dict['zhi'], limit_dict['zlo'] if hi < lo: # must have high > low limit_dict['zhi'], limit_dict['zlo'] = lo, hi if 'zhi' in united: # End-inclusive indexing: need to add one for the high slice # Only do this for converted values, not for pixel values # (i.e., if the xlo/ylo/zlo value had units) limit_dict['zhi'] += 1 for xx in 'zyx': if limit_dict[xx+'hi'] == limit_dict[xx+'lo']: # I think this should be unreachable now raise ValueError("The slice in the {0} direction will remove " "all elements. If you want a single-channel " "slice, you need a different approach." .format(xx)) slices = [slice(limit_dict[xx+'lo'], limit_dict[xx+'hi']) for xx in 'zyx'] slices = tuple(slices) log.debug('slices: {0}'.format(slices)) return self[slices] def subcube_from_ds9region(self, ds9_region, allow_empty=False): """ Extract a masked subcube from a ds9 region (only functions on celestial dimensions) Parameters ---------- ds9_region: str The DS9 region(s) to extract allow_empty: bool If this is False, an exception will be raised if the region contains no overlap with the cube """ import regions if isinstance(ds9_region, six.string_types): region_list = regions.DS9Parser(ds9_region).shapes.to_regions() else: raise TypeError("{0} should be a DS9 string".format(ds9_region)) return self.subcube_from_regions(region_list, allow_empty) def subcube_from_crtfregion(self, crtf_region, allow_empty=False): """ Extract a masked subcube from a CRTF region. Parameters ---------- crtf_region: str The CRTF region(s) string to extract allow_empty: bool If this is False, an exception will be raised if the region contains no overlap with the cube """ import regions if isinstance(crtf_region, six.string_types): region_list = regions.CRTFParser(crtf_region).shapes.to_regions() else: raise TypeError("{0} should be a CRTF string".format(crtf_region)) return self.subcube_from_regions(region_list, allow_empty) def subcube_from_regions(self, region_list, allow_empty=False): """ Extract a masked subcube from a list of ``regions.Region`` objects (only functions on celestial dimensions) Parameters ---------- region_list: ``regions.Region`` list The region(s) to extract allow_empty: bool, optional If this is False, an exception will be raised if the region contains no overlap with the cube. Default is False. """ import regions # Convert every region to a `regions.PixelRegion` object. regs = [] for x in region_list: if isinstance(x, regions.SkyRegion): regs.append(x.to_pixel(self.wcs.celestial)) elif isinstance(x, regions.PixelRegion): regs.append(x) else: raise TypeError("'{}' should be a `regions.Region` object".format(x)) # The list of regions is converted to a `regions.CompoundPixelRegion` object. compound_region = _regionlist_to_single_region(regs) # Compound mask of all the regions. mask = compound_region.to_mask() # Collecting frequency/velocity range, velocity type and rest frequency # of each region.
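# (Note: spectral metadata is format-dependent -- e.g., regions parsed # from CRTF can carry 'range', 'veltype', and 'restfreq' entries in their # meta, while DS9 regions generally do not, so the .get() calls below may # simply yield None.)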
ranges = [x.meta.get('range', None) for x in regs] veltypes = [x.meta.get('veltype', None) for x in regs] restfreqs = [x.meta.get('restfreq', None) for x in regs] xlo, xhi, ylo, yhi = mask.bbox.ixmin, mask.bbox.ixmax, mask.bbox.iymin, mask.bbox.iymax # Negative indices will do bad things, like wrap around the cube # If xhi/yhi are negative, there is no overlap if (xhi < 0) or (yhi < 0): raise ValueError("Region is outside of cube.") if xlo < 0: xlo = 0 if ylo < 0: ylo = 0 # If None, then the whole spectral range of the cube is selected. if None in ranges: subcube = self.subcube(xlo=xlo, ylo=ylo, xhi=xhi, yhi=yhi) else: ranges = self._velocity_freq_conversion_regions(ranges, veltypes, restfreqs) zlo = min([x[0] for x in ranges]) zhi = max([x[1] for x in ranges]) slab = self.spectral_slab(zlo, zhi) subcube = slab.subcube(xlo=xlo, ylo=ylo, xhi=xhi, yhi=yhi) if any(dim == 0 for dim in subcube.shape): if allow_empty: warnings.warn("The derived subset is empty: the region does not" " overlap with the cube (but allow_empty=True).") else: raise ValueError("The derived subset is empty: the region does not" " overlap with the cube.") # Crop the mask from the top-left corner so that it fits the subcube. maskarray = mask.data[:subcube.shape[1], :subcube.shape[2]].astype('bool') masked_subcube = subcube.with_mask(BooleanArrayMask(maskarray, subcube.wcs, shape=subcube.shape)) # by using ceil / floor above, we potentially introduced a NaN buffer # that we can now crop out return masked_subcube.minimal_subcube(spatial_only=True) def _velocity_freq_conversion_regions(self, ranges, veltypes, restfreqs): """ Makes the spectral range of the regions compatible with the spectral convention of the cube. ranges: list of `~astropy.units.Quantity` The spectral range (the lower and upper limits on the spectral axis) of each ``regions.Region`` object. veltypes: List of `str` The velocity convention followed by each region. Each element should be one of the following: {'RADIO' | 'OPTICAL' | 'Z' | 'BETA' | 'GAMMA' | 'RELATIVISTIC' | None} An element can be `None` if the veltype of the region is unknown, in which case it is assumed to take that of the cube. restfreqs: List of `~astropy.units.Quantity` The rest frequency of each region. """ header = self.wcs.to_header() # Obtaining rest frequency of the cube in GHz. restfreq_cube = get_rest_value_from_wcs(self.wcs).to("GHz", equivalencies=u.spectral()) CTYPE3 = header['CTYPE3'] veltype_cube = determine_vconv_from_ctype(CTYPE3) veltype_equivalencies = dict(RADIO=u.doppler_radio, OPTICAL=u.doppler_optical, Z=doppler_z, BETA=doppler_beta, GAMMA=doppler_gamma, RELATIVISTIC=u.doppler_relativistic ) final_ranges = [] for range, veltype, restfreq in zip(ranges, veltypes, restfreqs): if restfreq is None: restfreq = restfreq_cube restfreq = restfreq.to("GHz", equivalencies=u.spectral()) if veltype not in veltype_equivalencies and veltype is not None: raise ValueError("spectral-cube does not support the velocity " "type {}.".format(veltype)) veltype = veltype_equivalencies.get(veltype, veltype_cube) # Because there is a chance that the veltype and rest frequency # of the region may not be the same as those of the cube, we convert # to frequency and then convert to the spectral unit of the cube.
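# (Illustrative walk-through with made-up numbers: a region range of # [-30, 30] km/s with veltype RADIO and restfreq 115.271 GHz is first # mapped to a GHz interval via u.doppler_radio(115.271*u.GHz), then # re-expressed in the cube's CUNIT3 with the cube's own convention and # rest frequency by the second .to() call below.)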
freq_range = (u.Quantity(range).to("GHz", equivalencies=veltype(restfreq))) final_ranges.append(freq_range.to(header['CUNIT3'], equivalencies=veltype_cube(restfreq_cube))) return final_ranges def _val_to_own_unit(self, value, operation='compare', tofrom='to', keepunit=False): """ Given a value, check if it has a unit. If it does, convert to the cube's unit. If it doesn't, raise an exception. """ if isinstance(value, SpectralCube): if self.unit.is_equivalent(value.unit): return value else: return value.to(self.unit) elif hasattr(value, 'unit'): if keepunit: return value.to(self.unit) else: return value.to(self.unit).value else: raise ValueError("Can only {operation} cube objects {tofrom}" " SpectralCubes or Quantities with " "a unit attribute." .format(operation=operation, tofrom=tofrom)) def __gt__(self, value): """ Return a LazyMask representing the inequality Parameters ---------- value : number The threshold """ value = self._val_to_own_unit(value) return LazyComparisonMask(operator.gt, value, data=self._data, wcs=self._wcs) def __ge__(self, value): value = self._val_to_own_unit(value) return LazyComparisonMask(operator.ge, value, data=self._data, wcs=self._wcs) def __le__(self, value): value = self._val_to_own_unit(value) return LazyComparisonMask(operator.le, value, data=self._data, wcs=self._wcs) def __lt__(self, value): value = self._val_to_own_unit(value) return LazyComparisonMask(operator.lt, value, data=self._data, wcs=self._wcs) def __eq__(self, value): value = self._val_to_own_unit(value) return LazyComparisonMask(operator.eq, value, data=self._data, wcs=self._wcs) def __hash__(self): return id(self) def __ne__(self, value): value = self._val_to_own_unit(value) return LazyComparisonMask(operator.ne, value, data=self._data, wcs=self._wcs) def __add__(self, value): if isinstance(value, SpectralCube): return self._cube_on_cube_operation(operator.add, value) else: value = self._val_to_own_unit(value, operation='add', tofrom='from', keepunit=True) return self._apply_everywhere(operator.add, value) def __sub__(self, value): if isinstance(value, SpectralCube): return self._cube_on_cube_operation(operator.sub, value) else: value = self._val_to_own_unit(value, operation='subtract', tofrom='from', keepunit=True) return self._apply_everywhere(operator.sub, value) def __mul__(self, value): if isinstance(value, SpectralCube): return self._cube_on_cube_operation(operator.mul, value) else: return self._apply_everywhere(operator.mul, value) def __truediv__(self, value): return self.__div__(value) def __div__(self, value): if isinstance(value, SpectralCube): return self._cube_on_cube_operation(operator.truediv, value) else: return self._apply_everywhere(operator.truediv, value) def __pow__(self, value): if isinstance(value, SpectralCube): return self._cube_on_cube_operation(operator.pow, value) else: return self._apply_everywhere(operator.pow, value) def to_yt(self, spectral_factor=1.0, nprocs=None, **kwargs): """ Convert a spectral cube to a yt object that can be further analyzed in yt. Parameters ---------- spectral_factor : float, optional Factor by which to stretch the spectral axis. If set to 1, one pixel in spectral coordinates is equivalent to one pixel in spatial coordinates. If using yt 3.0 or later, additional keyword arguments will be passed onto yt's ``FITSDataset`` constructor. See the yt documentation (http://yt-project.org/docs/3.0/examining/loading_data.html?#fits-data) for details on options for reading FITS data. 
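Examples -------- A minimal sketch (requires yt to be installed; ``cube`` stands for any existing cube): >>> ytcube = cube.to_yt(spectral_factor=0.5)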
""" import yt if (('dev' in yt.__version__) or (LooseVersion(yt.__version__) >= LooseVersion('3.0'))): # yt has updated their FITS data set so that only the SpectralCube # variant takes spectral_factor try: from yt.frontends.fits.api import SpectralCubeFITSDataset as FITSDataset except ImportError: from yt.frontends.fits.api import FITSDataset from yt.units.unit_object import UnitParseError data = self._get_filled_data(fill=0.) if isinstance(data, da.Array): # Note that >f8 can cause issues with yt, and for visualization # we don't really need the full 64-bit of floating point # precision, so we cast to float32. data = data.astype(np.float32).compute() hdu = PrimaryHDU(data, header=self.wcs.to_header()) units = str(self.unit.to_string()) hdu.header["BUNIT"] = units hdu.header["BTYPE"] = "flux" ds = FITSDataset(hdu, nprocs=nprocs, spectral_factor=spectral_factor, **kwargs) # Check to make sure the units are legit try: ds.quan(1.0,units) except UnitParseError: raise RuntimeError("The unit %s was not parsed by yt. " % units+ "Check to make sure it is correct.") else: from yt.mods import load_uniform_grid data = {'flux': self._get_filled_data(fill=0.).transpose()} nz, ny, nx = self.shape if nprocs is None: nprocs = 1 bbox = np.array([[0.5,float(nx)+0.5], [0.5,float(ny)+0.5], [0.5,spectral_factor*float(nz)+0.5]]) ds = load_uniform_grid(data, [nx,ny,nz], 1., bbox=bbox, nprocs=nprocs, periodicity=(False, False, False)) return ytCube(self, ds, spectral_factor=spectral_factor) def to_glue(self, name=None, glue_app=None, dataset=None, start_gui=True): """ Send data to a new or existing Glue application Parameters ---------- name : str or None The name of the dataset within Glue. If None, defaults to 'SpectralCube'. If a dataset with the given name already exists, a new dataset with "_" appended will be added instead. glue_app : GlueApplication or None A glue application to send the data to. If this is not specified, a new glue application will be started if one does not already exist for this cube. Otherwise, the data will be sent to the existing glue application, `self._glue_app`. dataset : glue.core.Data or None An existing Data object to add the cube to. This is a good way to compare cubes with the same dimensions. Supercedes ``glue_app`` start_gui : bool Start the GUI when this is run. Set to `False` for testing. """ if name is None: name = 'SpectralCube' from glue.app.qt import GlueApplication from glue.core import DataCollection, Data from glue.core.coordinates import coordinates_from_header try: from glue.viewers.image.qt.data_viewer import ImageViewer except ImportError: from glue.viewers.image.qt.viewer_widget import ImageWidget as ImageViewer if dataset is not None: if name in [d.label for d in dataset.components]: name = name+"_" dataset[name] = self else: result = Data(label=name) result.coords = coordinates_from_header(self.header) result.add_component(self, name) if glue_app is None: if hasattr(self,'_glue_app'): glue_app = self._glue_app else: # Start a new glue session. This will quit when done. 
# I don't think the return statement is ever reached, based on # past attempts [@ChrisBeaumont - chime in here if you'd like] dc = DataCollection([result]) # start Glue ga = self._glue_app = GlueApplication(dc) self._glue_viewer = ga.new_data_viewer(ImageViewer, data=result) if start_gui: self._glue_app.start() return self._glue_app glue_app.add_datasets(self._glue_app.data_collection, result) def to_pvextractor(self): """ Open the cube in a quick viewer written in matplotlib that allows you to create PV extractions within the GUI """ from pvextractor.gui import PVSlicer return PVSlicer(self) def to_ds9(self, ds9id=None, newframe=False): """ Send the data to ds9 (this will create a copy in memory) Parameters ---------- ds9id: None or string The DS9 session ID. If 'None', a new one will be created. To find your ds9 session ID, open the ds9 menu option File:XPA:Information and look for the XPA_METHOD string, e.g. ``XPA_METHOD: 86ab2314:60063``. You would then call this function as ``cube.to_ds9('86ab2314:60063')`` newframe: bool Send the cube to a new frame or to the current frame? """ try: import ds9 except ImportError: import pyds9 as ds9 if ds9id is None: dd = ds9.DS9(start=True) else: dd = ds9.DS9(target=ds9id, start=False) if newframe: dd.set('frame new') dd.set_pyfits(self.hdulist) return dd @property def header(self): log.debug("Creating header") header = super(BaseSpectralCube, self).header # Preserve the cube's spectral units # (if CUNIT3 is not in the header, it is whatever that type's default unit is) if 'CUNIT3' in header and self._spectral_unit != u.Unit(header['CUNIT3']): header['CDELT3'] *= self._spectral_scale header['CRVAL3'] *= self._spectral_scale header['CUNIT3'] = self._spectral_unit.to_string(format='FITS') return header @property def hdu(self): """ HDU version of self """ log.debug("Creating HDU") hdu = PrimaryHDU(self.filled_data[:].value, header=self.header) return hdu @property def hdulist(self): return HDUList(self.hdu) @warn_slow def to(self, unit, equivalencies=()): """ Return the cube converted to the given unit (assuming it is equivalent). If conversion was required, this will be a copy; otherwise, the original cube is returned. """ if not isinstance(unit, u.Unit): unit = u.Unit(unit) if unit == self.unit: # No copying return self # Create the tuple of unit conversions needed. factor = cube_utils.bunit_converters(self, unit, equivalencies=equivalencies) # special case: array in equivalencies # (I don't think this should have to be special cased, but I don't know # how to manipulate broadcasting rules any other way) if hasattr(factor, '__len__') and len(factor) == len(self): return self._new_cube_with(data=self._data*factor[:,None,None], unit=unit) else: return self._new_cube_with(data=self._data*factor, unit=unit) def find_lines(self, velocity_offset=None, velocity_convention=None, rest_value=None, **kwargs): """ Using astroquery's splatalogue interface, search for lines within the spectral band. See `astroquery.splatalogue.Splatalogue` for information on keyword arguments Parameters ---------- velocity_offset : u.km/u.s equivalent An offset by which the spectral axis should be shifted before searching splatalogue. This value will be *added* to the velocity, so if you want to redshift a spectrum, make this value positive, and if you want to un-redshift it, make this value negative.
velocity_convention : 'radio', 'optical', 'relativistic' The doppler convention to pass to `with_spectral_unit` rest_value : u.GHz equivalent The rest frequency (or wavelength or energy) to be passed to `with_spectral_unit` """ warnings.warn("The line-finding routine is experimental. Please " "report bugs on the Issues page: " "https://github.com/radio-astro-tools/spectral-cube/issues", ExperimentalImplementationWarning ) from astroquery.splatalogue import Splatalogue if velocity_convention in DOPPLER_CONVENTIONS: velocity_convention = DOPPLER_CONVENTIONS[velocity_convention] if velocity_offset is not None: newspecaxis = self.with_spectral_unit(u.km/u.s, velocity_convention=velocity_convention, rest_value=rest_value).spectral_axis spectral_axis = (newspecaxis + velocity_offset).to(u.GHz, velocity_convention(rest_value)) else: spectral_axis = self.spectral_axis.to(u.GHz) numin,numax = spectral_axis.min(), spectral_axis.max() log.log(19, "Min/max frequency: {0},{1}".format(numin, numax)) result = Splatalogue.query_lines(numin, numax, **kwargs) return result @warn_slow def reproject(self, header, order='bilinear', use_memmap=False, filled=True): """ Spatially reproject the cube into a new header. Fills the data with the cube's ``fill_value`` to replace bad values before reprojection. If you want to reproject a cube both spatially and spectrally, you need to use `spectral_interpolate` as well. .. warning:: The current implementation of ``reproject`` requires that the whole cube be loaded into memory. Issue #506 notes that this is a problem, and it is on our to-do list to fix. Parameters ---------- header : `astropy.io.fits.Header` A header specifying a cube in valid WCS order : int or str, optional The order of the interpolation (if ``mode`` is set to ``'interpolation'``). This can be either one of the following strings: * 'nearest-neighbor' * 'bilinear' * 'biquadratic' * 'bicubic' or an integer. A value of ``0`` indicates nearest neighbor interpolation. use_memmap : bool If specified, a memory mapped temporary file on disk will be written to rather than storing the intermediate spectra in memory. filled : bool Fill the masked values with the cube's fill value before reprojection? Note that setting ``filled=False`` will use the raw data array, which can be a workaround that prevents loading large data into memory. """ try: from reproject.version import version except ImportError: raise ImportError("Requires the reproject package to be" " installed.") # Need version > 0.2 to work with cubes, >= 0.5 for memmap from distutils.version import LooseVersion if LooseVersion(version) < "0.5": raise Warning("Requires version >=0.5 of reproject. The current " "version is: {}".format(version)) elif LooseVersion(version) >= "0.6": reproj_kwargs = {} else: reproj_kwargs = {'independent_celestial_slices': True} from reproject import reproject_interp # TODO: Find the minimal subcube that contains the header and only reproject that # (see FITS_tools.regrid_cube for a guide on how to do this) newwcs = wcs.WCS(header) shape_out = tuple([header['NAXIS{0}'.format(i + 1)] for i in range(header['NAXIS'])][::-1]) if filled: data = self.unitless_filled_data[:] else: data = self._data if use_memmap: if data.dtype.itemsize not in (4,8): raise ValueError("Data must be float32 or float64 to be " "reprojected. 
Other data types need some " "kind of additional memory handling.") # note: requires reproject from December 2018 or later outarray = np.memmap(filename='output.np', mode='w+', shape=tuple(shape_out), dtype='float64' if data.dtype.itemsize == 8 else 'float32') else: outarray = None newcube, newcube_valid = reproject_interp((data, self.header), newwcs, output_array=outarray, shape_out=shape_out, order=order, **reproj_kwargs) return self._new_cube_with(data=newcube, wcs=newwcs, mask=BooleanArrayMask(newcube_valid.astype('bool'), newwcs), meta=self.meta, ) @parallel_docstring def spatial_smooth_median(self, ksize, update_function=None, raise_error_jybm=True, **kwargs): """ Smooth the image in each spatial-spatial plane of the cube using a median filter. Parameters ---------- ksize : int Size of the median filter (scipy.ndimage.filters.median_filter) update_function : method Method that is called to update an external progressbar If provided, it disables the default `astropy.utils.console.ProgressBar` raise_error_jybm : bool, optional Raises a `~spectral_cube.utils.BeamUnitsError` when smoothing a cube in Jy/beam units, since the brightness is dependent on the spatial resolution. kwargs : dict Passed to the convolve function """ if not scipyOK: raise ImportError("Scipy could not be imported: this function won't work.") self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) def _msmooth_image(im, **kwargs): return ndimage.filters.median_filter(im, size=ksize, **kwargs) newcube = self.apply_function_parallel_spatial(_msmooth_image, **kwargs) return newcube @parallel_docstring def spatial_smooth(self, kernel, convolve=convolution.convolve, raise_error_jybm=True, **kwargs): """ Smooth the image in each spatial-spatial plane of the cube. Parameters ---------- kernel : `~astropy.convolution.Kernel2D` A 2D kernel from astropy convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` raise_error_jybm : bool, optional Raises a `~spectral_cube.utils.BeamUnitsError` when smoothing a cube in Jy/beam units, since the brightness is dependent on the spatial resolution. kwargs : dict Passed to the convolve function """ self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) def _gsmooth_image(img, **kwargs): """ Helper function to smooth an image """ return convolve(img, kernel, normalize_kernel=True, **kwargs) newcube = self.apply_function_parallel_spatial(_gsmooth_image, **kwargs) return newcube @parallel_docstring def spectral_smooth_median(self, ksize, use_memmap=True, verbose=0, num_cores=None, **kwargs): """ Smooth the cube along the spectral dimension Parameters ---------- ksize : int Size of the median filter (scipy.ndimage.filters.median_filter) verbose : int Verbosity level to pass to joblib kwargs : dict Not used at the moment. """ if not scipyOK: raise ImportError("Scipy could not be imported: this function won't work.") return self.apply_function_parallel_spectral(ndimage.filters.median_filter, size=ksize, verbose=verbose, num_cores=num_cores, use_memmap=use_memmap, **kwargs) def _apply_function_parallel_base(self, iteration_data, function, applicator, num_cores=None, verbose=0, use_memmap=True, parallel=False, memmap_dir=None, update_function=None, **kwargs ): """ Apply a function in parallel using the ``applicator`` function. The function will be performed on data with masked values replaced with the cube's fill value. 
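In outline (a description of the control flow below, not an additional API): each item yielded by ``iteration_data`` is handed to ``applicator(arg, outcube, function, **kwargs)``, either serially or through joblib, and the applicator writes its result into the (possibly memory-mapped) ``outcube``.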
Parameters ---------- iteration_data : generator The data to be iterated over in the format expected by ``applicator`` function : function The function to apply in the spectral dimension. It must take two arguments: an array representing a spectrum and a boolean array representing the mask. It may also accept ``**kwargs``. The function must return an object with the same shape as the input spectrum. applicator : function Either ``_apply_spatial_function`` or ``_apply_spectral_function``, a tool to handle the iteration data and send it to the ``function`` appropriately. num_cores : int or None The number of cores to use if running in parallel. Should be >1 if ``parallel==True`` and cannot be >1 if ``parallel==False`` verbose : int Verbosity level to pass to joblib use_memmap : bool If specified, a memory mapped temporary file on disk will be written to rather than storing the intermediate spectra in memory. parallel : bool If set to ``False``, will force the use of a single thread instead of using ``joblib``. update_function : function A callback function to call on each iteration of the application. It should not accept any arguments. For example, this can be ``Progressbar.update`` or some function that prints a status report. The function *must* be picklable if ``parallel==True``. kwargs : dict Passed to ``function`` """ if use_memmap: ntf = tempfile.NamedTemporaryFile(dir=memmap_dir) outcube = np.memmap(ntf, mode='w+', shape=self.shape, dtype=np.float) else: if self._is_huge and not self.allow_huge_operations: raise ValueError("Applying a function without ``use_memmap`` " "requires loading the whole array into " "memory *twice*, which can overload the " "machine's memory for large cubes. Either " "set ``use_memmap=True`` or set " "``cube.allow_huge_operations=True`` to " "override this restriction.") outcube = np.empty(shape=self.shape, dtype=np.float) if num_cores == 1 and parallel: warnings.warn("parallel=True was specified but num_cores=1. " "Joblib will be used to run the task with a " "single thread.") elif num_cores is not None and num_cores > 1 and not parallel: raise ValueError("parallel execution was not requested, but " "multiple cores were: these are incompatible " "options. Either specify num_cores=1 or " "parallel=True") if parallel and use_memmap: # it is not possible to run joblib parallelization without memmap try: import joblib from joblib._parallel_backends import MultiprocessingBackend from joblib import register_parallel_backend, parallel_backend from joblib import Parallel, delayed if update_function is not None: # https://stackoverflow.com/questions/38483874/intermediate-results-from-joblib class MultiCallback: def __init__(self, *callbacks): self.callbacks = [cb for cb in callbacks if cb] def __call__(self, out): for cb in self.callbacks: cb(out) class Callback_Backend(MultiprocessingBackend): def callback(self, result): update_function() # Overload apply_async and set callback=self.callback def apply_async(self, func, callback=None): cbs = MultiCallback(callback, self.callback) return super().apply_async(func, cbs) joblib.register_parallel_backend('custom', Callback_Backend, make_default=True) Parallel(n_jobs=num_cores, verbose=verbose, max_nbytes=None)(delayed(applicator)(arg, outcube, function, **kwargs) for arg in iteration_data) except ImportError: if num_cores is not None and num_cores > 1: warnings.warn("Could not import joblib. 
Will run in serial.", warnings.ImportWarning) parallel = False # this isn't an else statement because we want to catch the case where # the above clause fails on ImportError if not parallel or not use_memmap: if update_function is not None: pbu = update_function elif verbose > 0: progressbar = ProgressBar(self.shape[1]*self.shape[2]) pbu = progressbar.update else: pbu = object for arg in iteration_data: applicator(arg, outcube, function, **kwargs) pbu() # TODO: do something about the mask? newcube = self._new_cube_with(data=outcube, wcs=self.wcs, mask=self.mask, meta=self.meta, fill_value=self.fill_value) return newcube def apply_function_parallel_spatial(self, function, num_cores=None, verbose=0, use_memmap=True, parallel=True, **kwargs ): """ Apply a function in parallel along the spatial dimension. The function will be performed on data with masked values replaced with the cube's fill value. Parameters ---------- function : function The function to apply in the spatial dimension. It must take two arguments: an array representing an image and a boolean array representing the mask. It may also accept ``**kwargs``. The function must return an object with the same shape as the input spectrum. num_cores : int or None The number of cores to use if running in parallel verbose : int Verbosity level to pass to joblib use_memmap : bool If specified, a memory mapped temporary file on disk will be written to rather than storing the intermediate spectra in memory. parallel : bool If set to ``False``, will force the use of a single core without using ``joblib``. kwargs : dict Passed to ``function`` """ shape = self.shape data = self.unitless_filled_data # 'images' is a generator # the boolean check will skip the function for bad spectra images = ((data[ii,:,:], self.mask.include(view=(ii, slice(None), slice(None))), ii, ) for ii in range(shape[0])) return self._apply_function_parallel_base(images, function, applicator=_apply_spatial_function, verbose=verbose, parallel=parallel, num_cores=num_cores, use_memmap=use_memmap, **kwargs) def apply_function_parallel_spectral(self, function, num_cores=None, verbose=0, use_memmap=True, parallel=True, **kwargs ): """ Apply a function in parallel along the spectral dimension. The function will be performed on data with masked values replaced with the cube's fill value. Parameters ---------- function : function The function to apply in the spectral dimension. It must take two arguments: an array representing a spectrum and a boolean array representing the mask. It may also accept ``**kwargs``. The function must return an object with the same shape as the input spectrum. num_cores : int or None The number of cores to use if running in parallel verbose : int Verbosity level to pass to joblib use_memmap : bool If specified, a memory mapped temporary file on disk will be written to rather than storing the intermediate spectra in memory. parallel : bool If set to ``False``, will force the use of a single core without using ``joblib``. kwargs : dict Passed to ``function`` """ shape = self.shape data = self.unitless_filled_data # 'spectra' is a generator # the boolean check will skip the function for bad spectra # TODO: should spatial good/bad be cached? 
spectra = ((data[:,jj,ii], self.mask.include(view=(slice(None), jj, ii)), ii, jj, ) for jj in range(shape[1]) for ii in range(shape[2])) return self._apply_function_parallel_base(iteration_data=spectra, function=function, applicator=_apply_spectral_function, use_memmap=use_memmap, parallel=parallel, verbose=verbose, num_cores=num_cores, **kwargs ) @parallel_docstring def sigma_clip_spectrally(self, threshold, verbose=0, use_memmap=True, num_cores=None, **kwargs): """ Run astropy's sigma clipper along the spectral axis, converting all bad (excluded) values to NaN. Parameters ---------- threshold : float The ``sigma`` parameter in `astropy.stats.sigma_clip`, which refers to the number of sigma above which to cut. verbose : int Verbosity level to pass to joblib """ return self.apply_function_parallel_spectral(stats.sigma_clip, sigma=threshold, axis=0, # changes behavior of sigmaclip num_cores=num_cores, use_memmap=use_memmap, verbose=verbose, **kwargs) @parallel_docstring def spectral_smooth(self, kernel, convolve=convolution.convolve, verbose=0, use_memmap=True, num_cores=None, **kwargs): """ Smooth the cube along the spectral dimension Note that the mask is left unchanged in this operation. Parameters ---------- kernel : `~astropy.convolution.Kernel1D` A 1D kernel from astropy convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` verbose : int Verbosity level to pass to joblib kwargs : dict Passed to the convolve function """ if isinstance(kernel.array, u.Quantity): raise u.UnitsError("The convolution kernel should be defined " "without a unit.") return self.apply_function_parallel_spectral(convolve, kernel=kernel, normalize_kernel=True, num_cores=num_cores, use_memmap=use_memmap, verbose=verbose, **kwargs) def spectral_interpolate(self, spectral_grid, suppress_smooth_warning=False, fill_value=None, update_function=None): """Resample the cube spectrally onto a specific grid Parameters ---------- spectral_grid : array An array of the spectral positions to regrid onto suppress_smooth_warning : bool If disabled, a warning will be raised when interpolating onto a grid that does not nyquist sample the existing grid. Disable this if you have already appropriately smoothed the data. fill_value : float Value for extrapolated spectral values that lie outside of the spectral range defined in the original data. The default is to use the nearest spectral channel in the cube. update_function : method Method that is called to update an external progressbar If provided, it disables the default `astropy.utils.console.ProgressBar` Returns ------- cube : SpectralCube """ inaxis = self.spectral_axis.to(spectral_grid.unit) indiff = np.mean(np.diff(inaxis)) outdiff = np.mean(np.diff(spectral_grid)) # account for reversed axes if outdiff < 0: spectral_grid = spectral_grid[::-1] outdiff = np.mean(np.diff(spectral_grid)) outslice = slice(None, None, -1) else: outslice = slice(None, None, 1) cubedata = self.filled_data specslice = slice(None) if indiff >= 0 else slice(None, None, -1) inaxis = inaxis[specslice] indiff = np.mean(np.diff(inaxis)) # insanity checks if indiff < 0 or outdiff < 0: raise ValueError("impossible.") assert np.all(np.diff(spectral_grid) > 0) assert np.all(np.diff(inaxis) > 0) np.testing.assert_allclose(np.diff(spectral_grid), outdiff, err_msg="Output grid must be linear") if outdiff > 2 * indiff and not suppress_smooth_warning: warnings.warn("Input grid has too small a spacing. 
The data should " "be smoothed prior to resampling.", SmoothingWarning ) newcube = np.empty([spectral_grid.size, self.shape[1], self.shape[2]], dtype=cubedata[:1, 0, 0].dtype) newmask = np.empty([spectral_grid.size, self.shape[1], self.shape[2]], dtype='bool') yy,xx = np.indices(self.shape[1:]) if update_function is None: pb = ProgressBar(xx.size) update_function = pb.update for ix, iy in (zip(xx.flat, yy.flat)): mask = self.mask.include(view=(specslice, iy, ix)) if any(mask): newcube[outslice,iy,ix] = \ np.interp(spectral_grid.value, inaxis.value, cubedata[specslice,iy,ix].value, left=fill_value, right=fill_value) if all(mask): newmask[:,iy,ix] = True else: interped = np.interp(spectral_grid.value, inaxis.value, mask) > 0 newmask[outslice,iy,ix] = interped else: newmask[:, iy, ix] = False newcube[:, iy, ix] = np.NaN update_function() newwcs = self.wcs.deepcopy() newwcs.wcs.crpix[2] = 1 newwcs.wcs.crval[2] = spectral_grid[0].value if outslice.step > 0 \ else spectral_grid[-1].value newwcs.wcs.cunit[2] = spectral_grid.unit.to_string('FITS') newwcs.wcs.cdelt[2] = outdiff.value if outslice.step > 0 \ else -outdiff.value newwcs.wcs.set() newbmask = BooleanArrayMask(newmask, wcs=newwcs) newcube = self._new_cube_with(data=newcube, wcs=newwcs, mask=newbmask, meta=self.meta, fill_value=self.fill_value) return newcube @warn_slow def convolve_to(self, beam, convolve=convolution.convolve_fft, update_function=None, **kwargs): """ Convolve each channel in the cube to a specified beam .. warning:: The current implementation of ``convolve_to`` creates an in-memory copy of the whole cube to store the convolved data. Issue #506 notes that this is a problem, and it is on our to-do list to fix. Parameters ---------- beam : `radio_beam.Beam` The beam to convolve to convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` update_function : method Method that is called to update an external progressbar If provided, it disables the default `astropy.utils.console.ProgressBar` kwargs : dict Keyword arguments to pass to the convolution function Returns ------- cube : `SpectralCube` A SpectralCube with a single ``beam`` """ # Check if the beams are the same. if beam == self.beam: warnings.warn("The given beam is identical to the current beam. " "Skipping convolution.") return self pixscale = wcs.utils.proj_plane_pixel_area(self.wcs.celestial)**0.5*u.deg convolution_kernel = beam.deconvolve(self.beam).as_kernel(pixscale) # Scale Jy/beam units by the change in beam size if self.unit.is_equivalent(u.Jy / u.beam): beam_ratio_factor = (beam.sr / self.beam.sr).value else: beam_ratio_factor = 1. # See #631: kwargs get passed within self.apply_function_parallel_spatial def convfunc(img, **kwargs): return convolve(img, convolution_kernel, normalize_kernel=True, **kwargs) * beam_ratio_factor newcube = self.apply_function_parallel_spatial(convfunc, **kwargs).with_beam(beam, raise_error_jybm=False) return newcube def mask_channels(self, goodchannels): """ Helper function to mask out channels. This function is equivalent to adding a mask with ``cube[view]`` where ``view`` is broadcastable to the cube shape, but it accepts 1D arrays that are not normally broadcastable. Parameters ---------- goodchannels : array A 1D boolean array declaring which channels should be kept. 
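For example (an illustrative sketch; ``noise_per_channel`` is a hypothetical 1D array with one noise estimate per spectral channel)::

    goodchannels = noise_per_channel < 3 * np.median(noise_per_channel)
    masked_cube = cube.mask_channels(goodchannels)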
Returns ------- cube : `SpectralCube` A cube with the specified channels masked """ goodchannels = np.asarray(goodchannels, dtype='bool') if goodchannels.ndim != 1: raise ValueError("goodchannels mask must be one-dimensional") if goodchannels.size != self.shape[0]: raise ValueError("goodchannels must have a length equal to the " "cube's spectral dimension.") return self.with_mask(goodchannels[:,None,None]) @warn_slow def downsample_axis(self, factor, axis, estimator=np.nanmean, truncate=False, use_memmap=True, progressbar=True): """ Downsample the cube by averaging over *factor* pixels along an axis. Crops right side if the shape is not a multiple of factor. The WCS will be 'downsampled' by the specified factor as well. If the downsample factor is odd, there will be an offset in the WCS. There is both an in-memory and a memory-mapped implementation; the default is to use the memory-mapped version. Technically, the 'large data' warning doesn't apply when using the memory-mapped version, but the warning is still there anyway. Parameters ---------- myarr : `~numpy.ndarray` The array to downsample factor : int The factor to downsample by axis : int The axis to downsample along estimator : function defaults to mean. You can downsample by summing or something else if you want a different estimator (e.g., downsampling error: you want to sum & divide by sqrt(n)) truncate : bool Whether to truncate the last chunk or average over a smaller number. e.g., if you downsample [1,2,3,4] by a factor of 3, you could get either [2] or [2,4] if truncate is True or False, respectively. use_memmap : bool Use a memory map on disk to avoid loading the whole cube into memory (several times)? If set, the warning about large cubes can be ignored (though you still have to override the warning) progressbar : bool Include a progress bar? 
Only works with ``use_memmap=True`` """ def makeslice(startpoint,axis=axis,step=factor): # make empty slices view = [slice(None) for ii in range(self.ndim)] # then fill the appropriate slice view[axis] = slice(startpoint,None,step) return tuple(view) # size of the dimension of interest xs = self.shape[axis] if not use_memmap: if xs % int(factor) != 0: if truncate: view = [slice(None) for ii in range(self.ndim)] view[axis] = slice(None,xs-(xs % int(factor))) view = tuple(view) crarr = self.unitless_filled_data[view] mask = self.mask[view].include() else: extension_shape = list(self.shape) extension_shape[axis] = (factor - xs % int(factor)) extension = np.empty(extension_shape) * np.nan crarr = np.concatenate((self.unitless_filled_data[:], extension), axis=axis) extension[:] = 0 mask = np.concatenate((self.mask.include(), extension), axis=axis) else: crarr = self.unitless_filled_data[:] mask = self.mask.include() # The extra braces here are crucial: We're adding an extra dimension so we # can average across it stacked_array = np.concatenate([[crarr[makeslice(ii)]] for ii in range(factor)]) dsarr = estimator(stacked_array, axis=0) if not isinstance(mask, np.ndarray): raise TypeError("Mask is of wrong data type") stacked_mask = np.concatenate([[mask[makeslice(ii)]] for ii in range(factor)]) mask = np.any(stacked_mask, axis=0) else: def makeslice_local(startpoint, axis=axis, nsteps=factor): # make empty slices view = [slice(None) for ii in range(self.ndim)] # then fill the appropriate slice view[axis] = slice(startpoint,startpoint+nsteps,1) return tuple(view) newshape = list(self.shape) newshape[axis] = (newshape[axis]//factor + ((1-int(truncate)) * (xs % int(factor) != 0))) newshape = tuple(newshape) if progressbar: progressbar = ProgressBar else: progressbar = lambda x: x # Create a view that will add a blank newaxis at the right spot view_newaxis = [slice(None) for ii in range(self.ndim)] view_newaxis[axis] = None view_newaxis = tuple(view_newaxis) ntf = tempfile.NamedTemporaryFile() dsarr = np.memmap(ntf, mode='w+', shape=newshape, dtype=np.float) ntf2 = tempfile.NamedTemporaryFile() mask = np.memmap(ntf2, mode='w+', shape=newshape, dtype=np.bool) for ii in progressbar(range(newshape[axis])): view_fulldata = makeslice_local(ii*factor) view_newdata = makeslice_local(ii, nsteps=1) to_average = self.unitless_filled_data[view_fulldata] to_anyfy = self.mask[view_fulldata].include() dsarr[view_newdata] = estimator(to_average, axis)[view_newaxis] mask[view_newdata] = np.any(to_anyfy, axis).astype('bool')[view_newaxis] # the slice should just start at zero; we had factor//2 here earlier, # and that was an error that probably half-compensated for an error in # wcs_utils view = makeslice(0) newwcs = wcs_utils.slice_wcs(self.wcs, view, shape=self.shape) newwcs._naxis = list(self.shape) # this is an assertion to ensure that the WCS produced is valid # (this is basically a regression test for #442) assert newwcs[:, slice(None), slice(None)] assert len(newwcs._naxis) == 3 return self._new_cube_with(data=dsarr, wcs=newwcs, mask=BooleanArrayMask(mask, wcs=newwcs)) def plot_channel_maps(self, nx, ny, channels, contourkwargs={}, output_file=None, fig=None, fig_smallest_dim_inches=8, decimals=3, zoom=1, textcolor=None, cmap='gray_r', tighten=False, textxloc=0.5, textyloc=0.9, savefig_kwargs={}, **kwargs): """ Make channel maps from a spectral cube Parameters ---------- input_file : str Name of the input spectral cube nx, ny : int Number of sub-plots in the x and y direction channels : list List of channels to 
show cmap : str The name of a colormap to use for the ``imshow`` colors contourkwargs : dict Keyword arguments passed to ``contour`` textcolor : None or str Color of the label text to overlay. If ``None``, will be determined automatically. If ``'notext'``, no text will be added. textxloc : float textyloc : float Text label X,Y-location in axis fraction units output_file : str Name of the matplotlib plot fig : matplotlib figure The figure object to plot onto. Will be overridden to enforce a specific aspect ratio. fig_smallest_dim_inches : float The size of the smallest dimension (either width or height) of the figure in inches. The other dimension will be selected based on the aspect ratio of the data: it cannot be a free parameter. decimals : int, optional Number of decimal places to show in spectral value zoom : int, optional How much to zoom in. In future versions of this function, the pointing center will be customizable. tighten : bool Call ``plt.tight_layout()`` after plotting? savefig_kwargs : dict Keyword arguments to pass to ``savefig`` (e.g., ``bbox_inches='tight'``) kwargs : dict Passed to ``imshow`` """ import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec cmap = getattr(plt.cm, cmap) if len(channels) != nx * ny: raise ValueError("Number of channels should be equal to nx * ny") # Read in spectral cube and get spectral axis spectral_axis = self.spectral_axis sizey, sizex = self.shape[1:] cenx = sizex / 2. ceny = sizey / 2. aspect_ratio = self.shape[2]/float(self.shape[1]) gridratio = ny / float(nx) * aspect_ratio if gridratio > 1: ysize = fig_smallest_dim_inches*gridratio xsize = fig_smallest_dim_inches else: xsize = fig_smallest_dim_inches*gridratio ysize = fig_smallest_dim_inches if fig is None: fig = plt.figure(figsize=(xsize, ysize)) else: fig.set_figheight(ysize) fig.set_figwidth(xsize) # unclear if needed #fig.subplots_adjust(margin,margin,1.-margin,1.-margin,0.,0.) axis_list = [] gs = GridSpec(ny, nx, figure=fig, hspace=0, wspace=0) for ichannel, channel in enumerate(channels): slc = self[channel,:,:] ax = plt.subplot(gs[ichannel], projection=slc.wcs) im = ax.imshow(slc.value, origin='lower', cmap=cmap, **kwargs) if contourkwargs: ax.contour(slc.value, **contourkwargs) ax.set_xlim(cenx - cenx / zoom, cenx + cenx / zoom) ax.set_ylim(ceny - ceny / zoom, ceny + ceny / zoom) if textcolor != 'notext': if textcolor is None: # determine average image color and set textcolor to opposite # (this is a bit hacky and there is _definitely_ a better way # to do this) avgcolor = im.cmap(im.norm(im.get_array())).mean(axis=(0,1)) totalcolor = avgcolor[:3].sum() if totalcolor > 0.5: textcolor = 'w' else: textcolor = 'k' ax.tick_params(color=textcolor) ax.set_title(("{0:." 
+ str(decimals) + "f}").format(spectral_axis[channel]), x=textxloc, y=textyloc, color=textcolor) # only label bottom-left panel with locations if (ichannel != nx*(ny-1)): ax.coords[0].set_ticklabel_position('') ax.coords[1].set_ticklabel_position('') ax.tick_params(direction='in') axis_list.append(ax) if tighten: plt.tight_layout() if output_file is not None: fig.savefig(output_file, **savefig_kwargs) return axis_list class SpectralCube(BaseSpectralCube, BeamMixinClass): __name__ = "SpectralCube" _oned_spectrum = OneDSpectrum def __new__(cls, *args, **kwargs): if kwargs.pop('use_dask', False): from .dask_spectral_cube import DaskSpectralCube return super().__new__(DaskSpectralCube) else: return super().__new__(cls) def __init__(self, data, wcs, mask=None, meta=None, fill_value=np.nan, header=None, allow_huge_operations=False, beam=None, wcs_tolerance=0.0, use_dask=False, **kwargs): super(SpectralCube, self).__init__(data=data, wcs=wcs, mask=mask, meta=meta, fill_value=fill_value, header=header, allow_huge_operations=allow_huge_operations, wcs_tolerance=wcs_tolerance, **kwargs) # Beam loading must happen *after* WCS is read if beam is None: beam = cube_utils.try_load_beam(self.header) else: if not isinstance(beam, Beam): raise TypeError("beam must be a radio_beam.Beam object.") # Allow setting the beam attribute even if there is no beam defined # Accessing `SpectralCube.beam` without a beam defined raises a # `NoBeamError` with an informative message. self.beam = beam if beam is not None: self._meta['beam'] = beam self._header.update(beam.to_header_keywords()) def _new_cube_with(self, **kwargs): beam = kwargs.pop('beam', None) if 'beam' in self._meta and beam is None: beam = self._beam newcube = super(SpectralCube, self)._new_cube_with(beam=beam, **kwargs) return newcube _new_cube_with.__doc__ = BaseSpectralCube._new_cube_with.__doc__ def with_beam(self, beam, raise_error_jybm=True): ''' Attach a beam object to the `~SpectralCube`. Parameters ---------- beam : `~radio_beam.Beam` `Beam` object defining the resolution element of the `~SpectralCube`. ''' if not isinstance(beam, Beam): raise TypeError("beam must be a radio_beam.Beam object.") self.check_jybeam_smoothing(raise_error_jybm=raise_error_jybm) meta = self._meta.copy() meta['beam'] = beam header = self._header.copy() header.update(beam.to_header_keywords()) newcube = self._new_cube_with(meta=self.meta, beam=beam) return newcube class VaryingResolutionSpectralCube(BaseSpectralCube, MultiBeamMixinClass): """ A variant of the SpectralCube class that has PSF (beam) information on a per-channel basis. """ __name__ = "VaryingResolutionSpectralCube" _oned_spectrum = VaryingResolutionOneDSpectrum def __new__(cls, *args, **kwargs): if kwargs.pop('use_dask', False): from .dask_spectral_cube import DaskVaryingResolutionSpectralCube return super().__new__(DaskVaryingResolutionSpectralCube) else: return super().__new__(cls) def __init__(self, *args, major_unit=u.arcsec, minor_unit=u.arcsec, **kwargs): """ Create a SpectralCube with an associated beam table. The new VaryingResolutionSpectralCube will have a ``beams`` attribute and a ``beam_threshold`` attribute as described below. It will perform some additional checks when trying to perform analysis across image frames. 
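An illustrative construction sketch (``data`` and ``wcs`` are assumed to exist, and ``major_arr``, ``minor_arr``, and ``pa_arr`` are hypothetical per-channel beam arrays)::

    from radio_beam import Beams
    beams = Beams(major=major_arr * u.arcsec, minor=minor_arr * u.arcsec,
                  pa=pa_arr * u.deg)
    vrsc = VaryingResolutionSpectralCube(data=data, wcs=wcs, beams=beams)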
Three new keyword arguments are accepted: Other Parameters ---------------- beam_table : `numpy.recarray` A table of beam major and minor axes in arcseconds and position angles, with labels BMAJ, BMIN, BPA beams : list A list of `radio_beam.Beam` objects beam_threshold : float or dict The fractional threshold above which beams are considered different. A dictionary may be used with entries 'area', 'major', 'minor', 'pa' so that you can specify a different fractional threshold for each of these. For example, if you want to check only that the areas are the same, and not worry about the shape (which might be a bad idea...), you could set ``beam_threshold={'area':0.01, 'major':1.5, 'minor':1.5, 'pa':5.0}`` """ # these types of cube are undefined without the radio_beam package beam_table = kwargs.pop('beam_table', None) beams = kwargs.pop('beams', None) beam_threshold = kwargs.pop('beam_threshold', 0.01) if (beam_table is None and beams is None): raise ValueError( "Must give either a beam table or a list of beams to " "initialize a VaryingResolutionSpectralCube") super(VaryingResolutionSpectralCube, self).__init__(*args, **kwargs) if isinstance(beam_table, BinTableHDU): beam_data_table = beam_table.data else: beam_data_table = beam_table if beam_table is not None: # CASA beam tables are in arcsec, and that's what we support beams = Beams(major=u.Quantity(beam_data_table['BMAJ'], major_unit), minor=u.Quantity(beam_data_table['BMIN'], minor_unit), pa=u.Quantity(beam_data_table['BPA'], u.deg), meta=[{key: row[key] for key in beam_data_table.names if key not in ('BMAJ','BPA', 'BMIN')} for row in beam_data_table], ) goodbeams = beams.isfinite # track which, if any, beams are masked for later use self.goodbeams_mask = goodbeams if not all(goodbeams): warnings.warn("There were {0} non-finite beams; layers with " "non-finite beams will be masked out.".format( np.count_nonzero(np.logical_not(goodbeams))), NonFiniteBeamsWarning ) beam_mask = BooleanArrayMask(goodbeams[:,None,None], wcs=self._wcs, shape=self.shape, ) if not is_broadcastable_and_smaller(beam_mask.shape, self._data.shape): # this should never be allowed to happen raise ValueError("Beam mask shape is not broadcastable to data shape: " "%s vs %s" % (beam_mask.shape, self._data.shape)) assert beam_mask.shape == self.shape new_mask = np.bitwise_and(self._mask, beam_mask) new_mask._validate_wcs(new_data=self._data, new_wcs=self._wcs) self._mask = new_mask if (len(beams) != self.shape[0]): raise ValueError("Beam list must have same size as spectral " "dimension") self.beams = beams self.beam_threshold = beam_threshold def __getitem__(self, view): # Need to allow self[:], self[:,:] if isinstance(view, (slice,int,np.int64)): view = (view, slice(None), slice(None)) elif len(view) == 2: view = view + (slice(None),) elif len(view) > 3: raise IndexError("Too many indices") meta = {} meta.update(self._meta) slice_data = [(s.start, s.stop, s.step) if hasattr(s,'start') else s for s in view] if 'slice' in meta: meta['slice'].append(slice_data) else: meta['slice'] = [slice_data] # intslices identifies the slices that are given by integers, i.e. # indices. Other slices are slice objects, e.g. obj[5:10], and have # 'start' attributes. 
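# Worked examples (descriptive comments, based on the logic below): for
# ``cube[:, 1, 2]`` the normalized view is (slice(None), 1, 2), giving
# intslices == [1, 0], so a 1D spectrum along the spectral axis is
# returned; for ``cube[1, :, :]`` intslices == [2] (the spectral axis), so
# a single channel is extracted as a 2D Slice, which requires that channel
# to have a single well-defined beam.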
intslices = [2-ii for ii,s in enumerate(view) if not hasattr(s,'start')] # for beams, we care only about the first slice, independent of its # type specslice = view[0] if intslices: if len(intslices) > 1: if 2 in intslices: raise NotImplementedError("1D slices along non-spectral " "axes are not yet implemented.") newwcs = self._wcs.sub([a for a in (1,2,3) if a not in [x+1 for x in intslices]]) if cube_utils._has_beam(self): bmarg = {'beam': self.beam} elif cube_utils._has_beams(self): bmarg = {'beams': self.unmasked_beams[specslice]} else: bmarg = {} return self._oned_spectrum(value=self._data[view], wcs=newwcs, copy=False, unit=self.unit, spectral_unit=self._spectral_unit, mask=self.mask[view], meta=meta, goodbeams_mask=self.goodbeams_mask[specslice] if hasattr(self, '_goodbeams_mask') else None, **bmarg ) # only one element, so drop an axis newwcs = wcs_utils.drop_axis(self._wcs, intslices[0]) header = self._nowcs_header # Slice objects know how to parse Beam objects stored in the # metadata # A 2D slice with a VRSC should not be allowed along a # position-spectral axis if not isinstance(self.unmasked_beams[specslice], Beam): raise AttributeError("2D slices along a spectral axis are not " "allowed for " "VaryingResolutionSpectralCubes. Convolve" " to a common resolution with " "`convolve_to` before attempting " "position-spectral slicing.") meta['beam'] = self.unmasked_beams[specslice] return Slice(value=self.filled_data[view], wcs=newwcs, copy=False, unit=self.unit, header=header, meta=meta) newmask = self._mask[view] if self._mask is not None else None newwcs = wcs_utils.slice_wcs(self._wcs, view, shape=self.shape) newwcs._naxis = list(self.shape) # this is an assertion to ensure that the WCS produced is valid # (this is basically a regression test for #442) assert newwcs[:, slice(None), slice(None)] assert len(newwcs._naxis) == 3 return self._new_cube_with(data=self._data[view], wcs=newwcs, mask=newmask, beams=self.unmasked_beams[specslice], meta=meta) def spectral_slab(self, lo, hi): """ Extract a new cube between two spectral coordinates Parameters ---------- lo, hi : :class:`~astropy.units.Quantity` The lower and upper spectral coordinate for the slab range. The units should be compatible with the units of the spectral axis. If the spectral axis is in frequency-equivalent units and you want to select a range in velocity, or vice-versa, you should first use :meth:`~spectral_cube.SpectralCube.with_spectral_unit` to convert the units of the spectral axis. """ # Find range of values for spectral axis ilo = self.closest_spectral_channel(lo) ihi = self.closest_spectral_channel(hi) if ilo > ihi: ilo, ihi = ihi, ilo ihi += 1 # Create WCS slab wcs_slab = self._wcs.deepcopy() wcs_slab.wcs.crpix[2] -= ilo # Create mask slab if self._mask is None: mask_slab = None else: try: mask_slab = self._mask[ilo:ihi, :, :] except NotImplementedError: warnings.warn("Mask slicing not implemented for " "{0} - dropping mask". 
format(self._mask.__class__.__name__), NotImplementedWarning ) mask_slab = None # Create new spectral cube slab = self._new_cube_with(data=self._data[ilo:ihi], wcs=wcs_slab, beams=self.unmasked_beams[ilo:ihi], mask=mask_slab) return slab def _new_cube_with(self, goodbeams_mask=None, **kwargs): beams = kwargs.pop('beams', self.unmasked_beams) beam_threshold = kwargs.pop('beam_threshold', self.beam_threshold) VRSC = VaryingResolutionSpectralCube newcube = super(VRSC, self)._new_cube_with(beams=beams, beam_threshold=beam_threshold, **kwargs) if goodbeams_mask is not None: newcube.goodbeams_mask = goodbeams_mask assert hasattr(newcube, '_goodbeams_mask') else: newcube.goodbeams_mask = np.isfinite(newcube.beams) assert hasattr(newcube, '_goodbeams_mask') return newcube _new_cube_with.__doc__ = BaseSpectralCube._new_cube_with.__doc__ def _check_beam_areas(self, threshold, mean_beam, mask=None): """ Check that the beam areas are the same to within some threshold """ if mask is not None: assert len(mask) == len(self.unmasked_beams) mask = np.array(mask, dtype='bool') else: mask = np.ones(len(self.unmasked_beams), dtype='bool') qtys = dict(sr=self.unmasked_beams.sr, major=self.unmasked_beams.major.to(u.deg), minor=self.unmasked_beams.minor.to(u.deg), # position angles are not really comparable #pa=u.Quantity([bm.pa for bm in self.unmasked_beams], u.deg), ) errormessage = "" for (qtyname, qty) in (qtys.items()): minv = qty[mask].min() maxv = qty[mask].max() mn = getattr(mean_beam, qtyname) maxdiff = (np.max(np.abs(u.Quantity((maxv-mn, minv-mn))))/mn).decompose() if isinstance(threshold, dict): th = threshold[qtyname] else: th = threshold if maxdiff > th: errormessage += ("Beam {2}s differ by up to {0}x, which is greater" " than the threshold {1}\n".format(maxdiff, threshold, qtyname )) if errormessage != "": raise ValueError(errormessage) def __getattribute__(self, attrname): """ For any functions that operate over the spectral axis, perform beam sameness checks before performing the operation to avoid unexpected results """ # short name to avoid long lines below VRSC = VaryingResolutionSpectralCube # what about apply_numpy_function, apply_function? since they're # called by some of these, maybe *only* those should be wrapped to # avoid redundant calls if attrname in ('moment', 'apply_numpy_function', 'apply_function', 'apply_function_parallel_spectral'): origfunc = super(VRSC, self).__getattribute__(attrname) return self._handle_beam_areas_wrapper(origfunc) else: return super(VRSC, self).__getattribute__(attrname) @property def header(self): header = super(VaryingResolutionSpectralCube, self).header # this indicates to CASA that there is a beam table header['CASAMBM'] = True return header @property def hdu(self): raise ValueError("For VaryingResolutionSpectralCube's, use hdulist " "instead of hdu.") @property def hdulist(self): """ HDUList version of self """ hdu = PrimaryHDU(self.filled_data[:].value, header=self.header) from .cube_utils import beams_to_bintable # use unmasked beams because, even if the beam is masked out, we should # write it bmhdu = beams_to_bintable(self.unmasked_beams) return HDUList([hdu, bmhdu]) @warn_slow def convolve_to(self, beam, allow_smaller=False, convolve=convolution.convolve_fft, update_function=None, **kwargs): """ Convolve each channel in the cube to a specified beam .. warning:: The current implementation of ``convolve_to`` creates an in-memory copy of the whole cube to store the convolved data. 
Issue #506 notes that this is a problem, and it is on our to-do list to fix. .. warning:: Note that if there is any misaligment between the cube's spatial pixel axes and the WCS's spatial axes *and* the beams are not round, the convolution kernels used here may be incorrect. Be wary in such cases! Parameters ---------- beam : `radio_beam.Beam` The beam to convolve to allow_smaller : bool If the specified target beam is smaller than the beam in a channel in any dimension and this is ``False``, it will raise an exception. convolve : function The astropy convolution function to use, either `astropy.convolution.convolve` or `astropy.convolution.convolve_fft` update_function : method Method that is called to update an external progressbar If provided, it disables the default `astropy.utils.console.ProgressBar` kwargs : dict Keyword arguments to pass to the convolution function Returns ------- cube : `SpectralCube` A SpectralCube with a single ``beam`` """ if ((self.wcs.celestial.wcs.get_pc()[0,1] != 0 or self.wcs.celestial.wcs.get_pc()[1,0] != 0)): warnings.warn("The beams will produce convolution kernels " "that are not aware of any misaligment " "between pixel and world coordinates, " "and there are off-diagonal elements of the " "WCS spatial transformation matrix. " "Unexpected results are likely.", BeamWarning ) pixscale = wcs.utils.proj_plane_pixel_area(self.wcs.celestial)**0.5*u.deg convolution_kernels = [] beam_ratio_factors = [] for bm,valid in zip(self.unmasked_beams, self.goodbeams_mask): if not valid: # just skip masked-out beams convolution_kernels.append(None) beam_ratio_factors.append(1.) continue elif beam == bm: # Point response when beams are equal, don't convolve. convolution_kernels.append(None) beam_ratio_factors.append(1.) continue try: cb = beam.deconvolve(bm) ck = cb.as_kernel(pixscale) convolution_kernels.append(ck) beam_ratio_factors.append((beam.sr / bm.sr)) except ValueError: if allow_smaller: convolution_kernels.append(None) beam_ratio_factors.append(1.) else: raise # Only use the beam ratios when convolving in Jy/beam if not self.unit.is_equivalent(u.Jy / u.beam): beam_ratio_factors = [1.] * len(convolution_kernels) if update_function is None: pb = ProgressBar(self.shape[0]) update_function = pb.update newdata = np.empty(self.shape) for ii,kernel in enumerate(convolution_kernels): # load each image from a slice to avoid loading whole cube into # memory img = self[ii,:,:].filled_data[:] # Kernel can only be None when `allow_smaller` is True, # or if the beams are equal. Only the latter is really valid. if kernel is None: newdata[ii, :, :] = img else: # See #631: kwargs get passed within self.apply_function_parallel_spatial newdata[ii, :, :] = convolve(img, kernel, normalize_kernel=True, **kwargs) * beam_ratio_factors[ii] update_function() newcube = SpectralCube(data=newdata, wcs=self.wcs, mask=self.mask, meta=self.meta, fill_value=self.fill_value, header=self.header, allow_huge_operations=self.allow_huge_operations, beam=beam, wcs_tolerance=self._wcs_tolerance) return newcube @warn_slow def to(self, unit, equivalencies=()): """ Return the cube converted to the given unit (assuming it is equivalent). If conversion was required, this will be a copy, otherwise it will """ if not isinstance(unit, u.Unit): unit = u.Unit(unit) if unit == self.unit: # No copying return self # Create the tuple of unit conversions needed. 
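# Descriptive note: for brightness-unit conversions that depend on the
# per-channel beam (e.g. Jy/beam <-> K), ``bunit_converters`` is expected
# to return one factor per channel; the length check below then broadcasts
# via factor[:, None, None], i.e. shape (nchan, 1, 1) against (nchan, ny, nx).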
factor = cube_utils.bunit_converters(self, unit, equivalencies=equivalencies) factor = np.array(factor) # special case: array in equivalencies # (I don't think this should have to be special cased, but I don't know # how to manipulate broadcasting rules any other way) if hasattr(factor, '__len__') and len(factor) == len(self): return self._new_cube_with(data=self._data*factor[:,None,None], unit=unit) else: return self._new_cube_with(data=self._data*factor, unit=unit) def mask_channels(self, goodchannels): """ Helper function to mask out channels. This function is equivalent to adding a mask with ``cube[view]`` where ``view`` is broadcastable to the cube shape, but it accepts 1D arrays that are not normally broadcastable. Additionally, for `VaryingResolutionSpectralCube` s, the beams in the bad channels will not be checked when averaging, convolving, and doing other operations that are multibeam-aware. Parameters ---------- goodchannels : array A 1D boolean array declaring which channels should be kept. Returns ------- cube : `SpectralCube` A cube with the specified channels masked """ goodchannels = np.asarray(goodchannels, dtype='bool') if goodchannels.ndim != 1: raise ValueError("goodchannels mask must be one-dimensional") if goodchannels.size != self.shape[0]: raise ValueError("goodchannels must have a length equal to the " "cube's spectral dimension.") cube = self.with_mask(goodchannels[:,None,None]) cube.goodbeams_mask = np.logical_and(goodchannels, self.goodbeams_mask) return cube def spectral_interpolate(self, *args, **kwargs): raise AttributeError("VaryingResolutionSpectralCubes can't be " "spectrally interpolated. Convolve to a " "common resolution with `convolve_to` before " "attempting spectral interpolation.") def spectral_smooth(self, *args, **kwargs): raise AttributeError("VaryingResolutionSpectralCubes can't be " "spectrally smoothed. Convolve to a " "common resolution with `convolve_to` before " "attempting spectral smoothed.") def _regionlist_to_single_region(region_list): """ Recursively merge a region list into a single compound region """ import regions if len(region_list) == 1: return region_list[0] left = _regionlist_to_single_region(region_list[:len(region_list)//2]) right = _regionlist_to_single_region(region_list[len(region_list)//2:]) return regions.CompoundPixelRegion(left, right, operator.or_) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/stokes_spectral_cube.py0000644000175100001710000001340500000000000023201 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import six import numpy as np from astropy.io.registry import UnifiedReadWriteMethod from .io.core import StokesSpectralCubeRead, StokesSpectralCubeWrite from .spectral_cube import SpectralCube, BaseSpectralCube from . import wcs_utils from .masks import BooleanArrayMask, is_broadcastable_and_smaller __all__ = ['StokesSpectalCube'] VALID_STOKES = ['I', 'Q', 'U', 'V', 'RR', 'LL', 'RL', 'LR', 'XX', 'XY', 'YX', 'YY', 'RX', 'RY', 'LX', 'LY', 'XR,', 'XL', 'YR', 'YL', 'PP', 'PQ', 'QP', 'QQ', 'RCircular', 'LCircular', 'Linear', 'Ptotal', 'Plinear', 'PFtotal', 'PFlinear', 'Pangle'] class StokesSpectralCube(object): """ A class to store a spectral cube with multiple Stokes parameters. The individual Stokes cubes can share a common mask in addition to having component-specific masks. 
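Example (an illustrative sketch; ``cube_I`` and ``cube_V`` are assumed to be `SpectralCube` instances on the same WCS and pixel grid)::

    stokes_cube = StokesSpectralCube({'I': cube_I, 'V': cube_V})
    stokes_cube.components    # ['I', 'V']
    i_cube = stokes_cube.I    # per-component access returns a SpectralCube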
""" def __init__(self, stokes_data, mask=None, meta=None, fill_value=None): self._stokes_data = stokes_data self._meta = meta or {} self._fill_value = fill_value reference = tuple(stokes_data.keys())[0] for component in stokes_data: if not isinstance(stokes_data[component], BaseSpectralCube): raise TypeError("stokes_data should be a dictionary of " "SpectralCube objects") if not wcs_utils.check_equality(stokes_data[component].wcs, stokes_data[reference].wcs): raise ValueError("All spectral cubes in stokes_data " "should have the same WCS") if component not in VALID_STOKES: raise ValueError("Invalid Stokes component: {0} - should be " "one of I, Q, U, V, RR, LL, RL, LR, XX, XY, YX, YY, RX, RY, LX, LY, XR, XL, YR, YL, PP, PQ, QP, QQ, RCircular, LCircular, Linear, Ptotal, Plinear, PFtotal, PFlinear, Pangle".format(component)) if stokes_data[component].shape != stokes_data[reference].shape: raise ValueError("All spectral cubes should have the same shape") self._wcs = stokes_data[reference].wcs self._shape = stokes_data[reference].shape if isinstance(mask, BooleanArrayMask): if not is_broadcastable_and_smaller(mask.shape, self._shape): raise ValueError("Mask shape is not broadcastable to data shape:" " {0} vs {1}".format(mask.shape, self._shape)) self._mask = mask @property def shape(self): return self._shape @property def mask(self): """ The underlying mask """ return self._mask @property def wcs(self): return self._wcs def __dir__(self): if six.PY2: return self.components + dir(type(self)) + list(self.__dict__) else: return self.components + super(StokesSpectralCube, self).__dir__() @property def components(self): return list(self._stokes_data.keys()) def __getattr__(self, attribute): """ Descriptor to return the Stokes cubes """ if attribute in self._stokes_data: if self.mask is not None: return self._stokes_data[attribute].with_mask(self.mask) else: return self._stokes_data[attribute] else: raise AttributeError("StokesSpectralCube has no attribute {0}".format(attribute)) def with_mask(self, mask, inherit_mask=True): """ Return a new StokesSpectralCube instance that contains a composite mask of the current StokesSpectralCube and the new ``mask``. Parameters ---------- mask : :class:`MaskBase` instance, or boolean numpy array The mask to apply. If a boolean array is supplied, it will be converted into a mask, assuming that `True` values indicate included elements. inherit_mask : bool (optional, default=True) If True, combines the provided mask with the mask currently attached to the cube Returns ------- new_cube : :class:`StokesSpectralCube` A cube with the new mask applied. Notes ----- This operation returns a view into the data, and not a copy. 
""" if isinstance(mask, np.ndarray): if not is_broadcastable_and_smaller(mask.shape, self.shape): raise ValueError("Mask shape is not broadcastable to data shape: " "%s vs %s" % (mask.shape, self.shape)) mask = BooleanArrayMask(mask, self.wcs) if self._mask is not None: return self._new_cube_with(mask=self.mask & mask if inherit_mask else mask) else: return self._new_cube_with(mask=mask) def _new_cube_with(self, stokes_data=None, mask=None, meta=None, fill_value=None): data = self._stokes_data if stokes_data is None else stokes_data mask = self._mask if mask is None else mask if meta is None: meta = {} meta.update(self._meta) fill_value = self._fill_value if fill_value is None else fill_value cube = StokesSpectralCube(stokes_data=data, mask=mask, meta=meta, fill_value=fill_value) return cube def with_spectral_unit(self, unit, **kwargs): stokes_data = {k: self._stokes_data[k].with_spectral_unit(unit, **kwargs) for k in self._stokes_data} return self._new_cube_with(stokes_data=stokes_data) read = UnifiedReadWriteMethod(StokesSpectralCubeRead) write = UnifiedReadWriteMethod(StokesSpectralCubeWrite) ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1633017864.73703 spectral-cube-0.6.0/spectral_cube/tests/0000755000175100001710000000000000000000000017563 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/__init__.py0000644000175100001710000000024700000000000021677 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import os def path(filename): return os.path.join(os.path.dirname(__file__), 'data', filename) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1633017864.7410302 spectral-cube-0.6.0/spectral_cube/tests/data/0000755000175100001710000000000000000000000020474 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/data/255-fk5.reg0000644000175100001710000000035100000000000022170 0ustar00runnerdocker# Region file format: DS9 version 4.1 global color=green dashlist=8 3 width=1 font="helvetica 10 normal roman" select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1 fk5 circle(1:36:14.969,+29:56:07.68,2.6509") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/data/255-pixel.reg0000644000175100001710000000035000000000000022623 0ustar00runnerdocker# Region file format: DS9 version 4.1 global color=green dashlist=8 3 width=1 font="helvetica 10 normal roman" select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1 image circle(2.5282832,3.4612342,1.3254484) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/data/Makefile0000644000175100001710000000020000000000000022124 0ustar00runnerdockerall: advs.fits dvsa.fits vsad.fits sadv.fits sdav.fits adv.fits vad.fits %.fits: make_test_cubes.py python make_test_cubes.py ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/data/__init__.py0000644000175100001710000000000000000000000022573 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1633017864.7410302 
spectral-cube-0.6.0/spectral_cube/tests/data/basic.image/0000755000175100001710000000000000000000000022636 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1633017864.7410302 spectral-cube-0.6.0/spectral_cube/tests/data/basic.image/logtable/0000755000175100001710000000000000000000000024427 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/data/basic.image/logtable/table.dat0000644000175100001710000000256000000000000026213 0ustar00runnerdocker
[binary CASA image table data omitted: basic.image is a CASA-format test image whose table files are not human-readable text; the archive entries between this point and the mask test suite below are unrecoverable]
2, data=data, wcs=wcs) assert_allclose(m.include(data, wcs), [[[0, 0, 0, 1, 1]]]) assert_allclose(m.exclude(data, wcs), [[[1, 1, 1, 0, 0]]]) assert_allclose(m._filled(data, wcs), [[[np.nan, np.nan, np.nan, 3, 4]]]) assert_allclose(m._flattened(data, wcs), [3, 4]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(1, 4))), [0, 0, 1]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(1, 4))), [1, 1, 0]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(1, 4))), [np.nan, np.nan, 3]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(1, 4))), [3]) # Now if we call with different data, the results for include and exclude # should *not* change.
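# (Descriptive note: the mask captured ``data`` at construction time, so
# the include/exclude pattern is frozen; only the values substituted by
# ``_filled``/``_flattened`` come from the array passed in below.)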
data = (3 - np.arange(5)).reshape((1, 1, 5)) assert_allclose(m.include(data, wcs), [[[0, 0, 0, 1, 1]]]) assert_allclose(m.exclude(data, wcs), [[[1, 1, 1, 0, 0]]]) assert_allclose(m._filled(data, wcs), [[[np.nan, np.nan, np.nan, 0, -1]]]) assert_allclose(m._flattened(data, wcs), [0, -1]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(1, 4))), [0, 0, 1]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(1, 4))), [1, 1, 0]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(1, 4))), [np.nan, np.nan, 0]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(1, 4))), [0]) def test_lazy_comparison_mask(): data = np.arange(5).reshape((1, 1, 5)) wcs = WCS() m = LazyComparisonMask(operator.gt, 2, data=data, wcs=wcs) assert_allclose(m.include(data, wcs), [[[0, 0, 0, 1, 1]]]) assert_allclose(m.exclude(data, wcs), [[[1, 1, 1, 0, 0]]]) assert_allclose(m._filled(data, wcs), [[[np.nan, np.nan, np.nan, 3, 4]]]) assert_allclose(m._flattened(data, wcs), [3, 4]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(1, 4))), [0, 0, 1]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(1, 4))), [1, 1, 0]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(1, 4))), [np.nan, np.nan, 3]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(1, 4))), [3]) # Now if we call with different data, the results for include and exclude # should *not* change. data = (3 - np.arange(5)).reshape((1, 1, 5)) assert_allclose(m.include(data, wcs), [[[0, 0, 0, 1, 1]]]) assert_allclose(m.exclude(data, wcs), [[[1, 1, 1, 0, 0]]]) assert_allclose(m._filled(data, wcs), [[[np.nan, np.nan, np.nan, 0, -1]]]) assert_allclose(m._flattened(data, wcs), [0, -1]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(1, 4))), [0, 0, 1]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(1, 4))), [1, 1, 0]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(1, 4))), [np.nan, np.nan, 0]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(1, 4))), [0]) def test_function_mask_incorrect_shape(): # The following function will return the incorrect shape because it does # not apply the view def threshold(data, wcs, view=()): return data > 2 m = FunctionMask(threshold) data = np.arange(5).reshape((1, 1, 5)) wcs = WCS() with pytest.raises(ValueError) as exc: m.include(data, wcs, view=(0, 0, slice(1, 4))) assert exc.value.args[0] == "Function did not return mask with correct shape - expected (3,), got (1, 1, 5)" def test_function_mask(): def threshold(data, wcs, view=()): return data[view] > 2 m = FunctionMask(threshold) data = np.arange(5).reshape((1, 1, 5)) wcs = WCS() assert_allclose(m.include(data, wcs), [[[0, 0, 0, 1, 1]]]) assert_allclose(m.exclude(data, wcs), [[[1, 1, 1, 0, 0]]]) assert_allclose(m._filled(data, wcs), [[[np.nan, np.nan, np.nan, 3, 4]]]) assert_allclose(m._flattened(data, wcs), [3, 4]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(1, 4))), [0, 0, 1]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(1, 4))), [1, 1, 0]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(1, 4))), [np.nan, np.nan, 3]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(1, 4))), [3]) # Now if we call with different data, the results for include and exclude # *should* change. 
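# (Descriptive note: FunctionMask re-evaluates ``threshold`` on whatever
# data is passed in, so with the reversed array only the first element
# exceeds 2 and the include pattern flips accordingly.)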
data = (3 - np.arange(5)).reshape((1, 1, 5)) assert_allclose(m.include(data, wcs), [[[1, 0, 0, 0, 0]]]) assert_allclose(m.exclude(data, wcs), [[[0, 1, 1, 1, 1]]]) assert_allclose(m._filled(data, wcs), [[[3, np.nan, np.nan, np.nan, np.nan]]]) assert_allclose(m._flattened(data, wcs), [3]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(0, 3))), [1, 0, 0]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(0, 3))), [0, 1, 1]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(0, 3))), [3, np.nan, np.nan]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(0, 3))), [3]) def test_composite_mask(): def lower_threshold(data, wcs, view=()): return data[view] > 0 def upper_threshold(data, wcs, view=()): return data[view] < 3 m1 = FunctionMask(lower_threshold) m2 = FunctionMask(upper_threshold) m = m1 & m2 data = np.arange(5).reshape((1, 1, 5)) wcs = WCS() assert_allclose(m.include(data, wcs), [[[0, 1, 1, 0, 0]]]) assert_allclose(m.exclude(data, wcs), [[[1, 0, 0, 1, 1]]]) assert_allclose(m._filled(data, wcs), [[[np.nan, 1, 2, np.nan, np.nan]]]) assert_allclose(m._flattened(data, wcs), [1, 2]) assert_allclose(m.include(data, wcs, view=(0, 0, slice(1, 4))), [1, 1, 0]) assert_allclose(m.exclude(data, wcs, view=(0, 0, slice(1, 4))), [0, 0, 1]) assert_allclose(m._filled(data, wcs, view=(0, 0, slice(1, 4))), [1, 2, np.nan]) assert_allclose(m._flattened(data, wcs, view=(0, 0, slice(1, 4))), [1, 2]) def test_mask_logic(): data = np.arange(5).reshape((1, 1, 5)) wcs = WCS() def threshold_1(data, wcs, view=()): return data[view] > 0 def threshold_2(data, wcs, view=()): return data[view] < 4 def threshold_3(data, wcs, view=()): return data[view] != 2 m1 = FunctionMask(threshold_1) m2 = FunctionMask(threshold_2) m3 = FunctionMask(threshold_3) m = m1 & m2 assert_allclose(m.include(data, wcs), [[[0, 1, 1, 1, 0]]]) m = m1 | m2 assert_allclose(m.include(data, wcs), [[[1, 1, 1, 1, 1]]]) m = m1 | ~m2 assert_allclose(m.include(data, wcs), [[[0, 1, 1, 1, 1]]]) m = m1 & m2 & m3 assert_allclose(m.include(data, wcs), [[[0, 1, 0, 1, 0]]]) m = (m1 | m3) & m2 assert_allclose(m.include(data, wcs), [[[1, 1, 1, 1, 0]]]) m = m1 ^ m2 assert_allclose(m.include(data, wcs), [[[1, 0, 0, 0, 1]]]) m = m1 ^ m3 assert_allclose(m.include(data, wcs), [[[1, 0, 1, 0, 0]]]) @pytest.fixture def filename(request): return request.getfixturevalue(request.param) @pytest.mark.parametrize(('filename'), (('data_advs'), ('data_dvsa'), ('data_sdav'), ('data_sadv'), ('data_vsad'), ('data_vad'), ('data_adv'), ), indirect=['filename']) def test_mask_spectral_unit(filename, use_dask): cube, data = cube_and_raw(filename, use_dask=use_dask) mask = BooleanArrayMask(data, cube._wcs) mask_freq = mask.with_spectral_unit(u.Hz) assert mask_freq._wcs.wcs.ctype[mask_freq._wcs.wcs.spec] == 'FREQ-W2F' # values taken from header rest = 1.42040571841E+09*u.Hz crval = -3.21214698632E+05*u.m/u.s outcv = crval.to(u.m, u.doppler_optical(rest)).to(u.Hz, u.spectral()) assert_allclose(mask_freq._wcs.wcs.crval[mask_freq._wcs.wcs.spec], outcv.to(u.Hz).value) def test_wcs_validity_check(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) mask = BooleanArrayMask(data > 0, cube._wcs) cube = cube.with_mask(mask) s2 = cube.spectral_slab(-2 * u.km / u.s, 2 * u.km / u.s) s3 = s2.with_spectral_unit(u.km / u.s, velocity_convention=u.doppler_radio) # just checking that this works, not that it does anything in particular moment_map = s3.moment(order=1) def test_wcs_validity_check_failure(data_adv, use_dask): cube, data = cube_and_raw(data_adv, 
use_dask=use_dask) assert cube.wcs.wcs.crval[2] == -3.21214698632E+05 wcs2 = cube.wcs.deepcopy() # add some difference in the 5th decimal place wcs2.wcs.crval[2] += 0.00001 assert wcs2.wcs.crval[2] == -3.21214698622E+05 assert cube.wcs.wcs.crval[2] == -3.21214698632E+05 # can make a mask mask = BooleanArrayMask(data>0, wcs2) assert cube.wcs.wcs.crval[2] != wcs2.wcs.crval[2] assert cube._wcs.wcs.crval[2] != wcs2.wcs.crval[2] # but if it's not exactly equal, an error should be raised at this step with pytest.raises(ValueError, match="WCS does not match mask WCS"): cube.with_mask(mask) # this one should work though cube = cube.with_mask(mask, wcs_tolerance=1e-4) assert cube._wcs_tolerance == 1e-4 # then the rest of this should be OK s2 = cube.spectral_slab(-2 * u.km / u.s, 2 * u.km / u.s) s3 = s2.with_spectral_unit(u.km / u.s, velocity_convention=u.doppler_radio) # just checking that this works, not that it does anything in particular moment_map = s3.moment(order=1) def test_mask_spectral_unit_functions(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) # function mask should do nothing mask1 = FunctionMask(lambda x: x>0) mask_freq1 = mask1.with_spectral_unit(u.Hz) # lazy mask behaves like booleanarraymask mask2 = LazyMask(lambda x: x>0, cube=cube) mask_freq2 = mask2.with_spectral_unit(u.Hz) assert mask_freq2._wcs.wcs.ctype[mask_freq2._wcs.wcs.spec] == 'FREQ-W2F' # values taken from header rest = 1.42040571841E+09*u.Hz crval = -3.21214698632E+05*u.m/u.s outcv = crval.to(u.m, u.doppler_optical(rest)).to(u.Hz, u.spectral()) assert_allclose(mask_freq2._wcs.wcs.crval[mask_freq2._wcs.wcs.spec], outcv.to(u.Hz).value) # again, test that it works mask3 = CompositeMask(mask1,mask2) mask_freq3 = mask3.with_spectral_unit(u.Hz) mask_freq3 = CompositeMask(mask_freq1,mask_freq2) mask_freq_freq3 = mask_freq3.with_spectral_unit(u.Hz) # this one should fail #failedmask = CompositeMask(mask_freq1,mask2) def is_broadcastable_try(shp1, shp2): """ Test whether an array shape can be broadcast to another (this is the try/fail approach, which is guaranteed right.... right?) 
http://stackoverflow.com/questions/24743753/test-if-an-array-is-broadcastable-to-a-shape/24745359#24745359 """ #This variant does not work as of np 1.10: the strided arrays aren't #writable and therefore apparently cannot be broadcast # x = np.array([1]) # a = as_strided(x, shape=shp1, strides=[0] * len(shp1)) # b = as_strided(x, shape=shp2, strides=[0] * len(shp2)) a = np.ones(shp1) b = np.ones(shp2) try: c = np.broadcast_arrays(a, b) # reverse order: compare last dim first (as broadcasting does) if any(bi med mcube = cube.with_mask(mask) assert all(mask[:,1,1].include() == mask.include()[:,1,1]) spec = mcube[:,1,1] assert spec.ndim == 1 assert all(spec.mask.include() == mask.include()[:,1,1]) assert spec[:-1].mask.include().shape == (3,) assert all(spec[:-1].mask.include() == mask.include()[:-1,1,1]) assert isinstance(spec[0], u.Quantity) spec = mcube[:-1,1,1] assert spec.ndim == 1 assert hasattr(spec, '_fill_value') assert all(spec.mask.include() == mask.include()[:-1,1,1]) assert spec[:-1].mask.include().shape == (2,) assert all(spec[:-1].mask.include() == mask.include()[:-2,1,1]) assert isinstance(spec[0], u.Quantity) def test_1dcomparison_mask_1d_index(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) med = cube.median() mask = cube > med mcube = cube.with_mask(mask) assert all(mask[:,1,1].include() == mask.include()[:,1,1]) spec = mcube[:,1,1] assert spec.ndim == 1 assert all(spec.mask.include() == [True,False,False,True]) assert spec[:-1].mask.include().shape == (3,) assert all(spec[:-1].mask.include() == [True,False,False]) assert isinstance(spec[0], u.Quantity) def test_1dmask_indexing(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) med = cube.median() mask = cube > med mcube = cube.with_mask(mask) assert all(mask[:,1,1].include() == mask.include()[:,1,1]) spec = mcube[:,1,1] badvals = np.array([False,True,True,False], dtype='bool') assert np.all(np.isnan(spec[badvals])) assert not np.any(np.isnan(spec[~badvals])) def test_numpy_ma_tools(data_adv, use_dask): """ check that np.ma.core.is_masked works """ cube, data = cube_and_raw(data_adv, use_dask=use_dask) med = cube.median() mask = cube > med mcube = cube.with_mask(mask) assert np.ma.core.is_masked(mcube) assert np.ma.core.getmask(mcube) is not None assert np.ma.core.is_masked(mcube[:,0,0]) assert np.ma.core.getmask(mcube[:,0,0]) is not None @pytest.mark.xfail def test_numpy_ma_tools_2d(data_adv, use_dask): """ This depends on 2D objects keeping masks, which depends on #395. 
so, TODO: un-xfail this """ cube, data = cube_and_raw(data_adv, use_dask=use_dask) med = cube.median() mask = cube > med mcube = cube.with_mask(mask) assert np.ma.core.is_masked(mcube[0,:,:]) assert np.ma.core.getmask(mcube[0,:,:]) is not None def test_filled(data_adv, use_dask): """ test that 'filled' works """ cube, data = cube_and_raw(data_adv, use_dask=use_dask) med = cube.median() mask = cube > med mcube = cube.with_mask(mask) assert np.isnan(mcube._fill_value) filled = mcube.filled(np.nan) filled_ = mcube.filled() assert_allclose(filled, filled_) assert (np.isnan(filled) == mcube.mask.exclude()).all() def test_boolean_array_composite_mask(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) med = cube.median() mask = cube > med arrmask = cube.max(axis=0) > med # we're just testing that this doesn't fail combined_mask = mask & arrmask mcube = cube.with_mask(combined_mask) # not doing assert_almost_equal because I don't want to worry about precision assert (mcube.sum() > 9.0 * u.K) & (mcube.sum() < 9.1*u.K) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_moments.py0000644000175100001710000001610300000000000022657 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import warnings from distutils.version import LooseVersion import pytest import numpy as np import astropy from astropy.wcs import WCS from astropy import units as u from astropy.io import fits from ..spectral_cube import SpectralCube, VarianceWarning from .helpers import assert_allclose # the back of the book dv = 3e-2 * u.Unit('m/s') dy = 2e-5 * u.Unit('deg') dx = 1e-5 * u.Unit('deg') data_unit = u.K m0v = np.array([[27, 30, 33], [36, 39, 42], [45, 48, 51]]) * data_unit * dv m0y = np.array([[9, 12, 15], [36, 39, 42], [63, 66, 69]]) * data_unit * dy m0x = np.array([[3, 12, 21], [30, 39, 48], [57, 66, 75]]) * data_unit * dx # M1V is a special case, where we return the actual coordinate m1v = np.array([[1.66666667, 1.6, 1.54545455], [1.5, 1.46153846, 1.42857143], [1.4, 1.375, 1.35294118]]) * dv + 2 * u.Unit('m/s') m1y = np.array([[1.66666667, 1.5, 1.4], [1.16666667, 1.15384615, 1.14285714], [1.0952381, 1.09090909, 1.08695652]]) * dy m1x = np.array([[1.66666667, 1.16666667, 1.0952381], [1.06666667, 1.05128205, 1.04166667], [1.03508772, 1.03030303, 1.02666667]]) * dx m2v = np.array([[0.22222222, 0.30666667, 0.36914601], [0.41666667, 0.45364892, 0.4829932], [0.50666667, 0.52604167, 0.54209919]]) * dv ** 2 m2y = np.array([[0.22222222, 0.41666667, 0.50666667], [0.63888889, 0.64299803, 0.6462585], [0.65759637, 0.6584022, 0.65910523]]) * dy ** 2 m2x = np.array([[0.22222222, 0.63888889, 0.65759637], [0.66222222, 0.66403682, 0.66493056], [0.66543552, 0.66574839, 0.66595556]]) * dx ** 2 MOMENTS = [[m0v, m0y, m0x], [m1v, m1y, m1x], [m2v, m2y, m2x]] # In issue 184, the cubes were corrected such that they all have valid units # Therefore, no separate tests are needed for moments-with-units and those # without MOMENTSu = MOMENTS def moment_cube(): cube = np.arange(27).reshape([3, 3, 3]).astype(np.float) wcs = WCS(naxis=3) wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'VELO'] # choose values to minimize spherical distortions wcs.wcs.cdelt = np.array([-1, 2, 3], dtype='float32') / 1e5 wcs.wcs.crpix = np.array([1, 1, 1], dtype='float32') wcs.wcs.crval = np.array([0, 1e-3, 2e-3], dtype='float32') wcs.wcs.cunit = ['deg', 'deg', 'km/s'] header = wcs.to_header() header['BUNIT'] = 'K' hdu = 
fits.PrimaryHDU(data=cube, header=header) return hdu axis_order = pytest.mark.parametrize(('axis', 'order'), ((0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2))) if LooseVersion(astropy.__version__[:3]) >= LooseVersion('1.0'): # The relative error is slightly larger on astropy-dev # There is no obvious reason for this. rtol = 2e-7 atol = 1e-30 else: rtol = 1e-7 atol = 0.0 @axis_order def test_strategies_consistent(axis, order, use_dask): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) cwise = sc.moment(axis=axis, order=order, how='cube') swise = sc.moment(axis=axis, order=order, how='slice') rwise = sc.moment(axis=axis, order=order, how='ray') assert_allclose(cwise, swise, rtol=rtol, atol=atol) assert_allclose(cwise, rwise, rtol=rtol, atol=atol) @pytest.mark.parametrize(('order', 'axis', 'how'), [(o, a, h) for o in [0, 1, 2] for a in [0, 1, 2] for h in ['cube', 'slice', 'auto', 'ray']]) def test_reference(order, axis, how, use_dask): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) mom_sc = sc.moment(order=order, axis=axis, how=how) assert_allclose(mom_sc, MOMENTS[order][axis]) @axis_order def test_consistent_mask_handling(axis, order, use_dask): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) sc._mask = sc > 4*u.K cwise = sc.moment(axis=axis, order=order, how='cube') swise = sc.moment(axis=axis, order=order, how='slice') rwise = sc.moment(axis=axis, order=order, how='ray') assert_allclose(cwise, swise, rtol=rtol, atol=atol) assert_allclose(cwise, rwise, rtol=rtol, atol=atol) def test_convenience_methods(use_dask): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) assert_allclose(sc.moment0(axis=0), MOMENTS[0][0]) assert_allclose(sc.moment1(axis=2), MOMENTS[1][2]) assert_allclose(sc.moment2(axis=1), MOMENTS[2][1]) def test_linewidth(use_dask): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) with warnings.catch_warnings(record=True) as w: assert_allclose(sc.moment2(), MOMENTS[2][0]) assert len(w) == 1 assert w[0].category == VarianceWarning assert str(w[0].message) == ("Note that the second moment returned will be a " "variance map. 
To get a linewidth map, use the " "SpectralCube.linewidth_fwhm() or " "SpectralCube.linewidth_sigma() methods instead.") with warnings.catch_warnings(record=True) as w: assert_allclose(sc.linewidth_sigma(), MOMENTS[2][0] ** 0.5) assert_allclose(sc.linewidth_fwhm(), MOMENTS[2][0] ** 0.5 * 2.3548200450309493) assert len(w) == 0 def test_preserve_unit(use_dask): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) sc_kms = sc.with_spectral_unit(u.km/u.s) m0 = sc_kms.moment0(axis=0) m1 = sc_kms.moment1(axis=0) assert_allclose(m0, MOMENTS[0][0].to(u.K*u.km/u.s)) assert_allclose(m1, MOMENTS[1][0].to(u.km/u.s)) def test_with_flux_unit(use_dask): """ As of Issue 184, redundant with test_reference """ mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) sc._unit = u.K sc_kms = sc.with_spectral_unit(u.km/u.s) m0 = sc_kms.moment0(axis=0) m1 = sc_kms.moment1(axis=0) assert sc.unit == u.K assert sc.filled_data[:].unit == u.K assert_allclose(m0, MOMENTS[0][0].to(u.K*u.km/u.s)) assert_allclose(m1, MOMENTS[1][0].to(u.km/u.s)) @pytest.mark.parametrize(('order', 'axis', 'how'), [(o, a, h) for o in [0, 1, 2] for a in [0, 1, 2] for h in ['cube', 'slice', 'auto', 'ray']]) def test_how_withfluxunit(order, axis, how, use_dask): """ Regression test for issue 180 As of issue 184, this is mostly redundant with test_reference except that it (kind of) checks that units are set """ mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu, use_dask=use_dask) sc._unit = u.K mom_sc = sc.moment(order=order, axis=axis, how=how) assert sc.unit == u.K assert sc.filled_data[:].unit == u.K assert_allclose(mom_sc, MOMENTSu[order][axis]) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_performance.py0000644000175100001710000001724500000000000023506 0ustar00runnerdocker""" Performance-related tests to make sure we don't use more memory than we should. For now this is just for SpectralCube, not DaskSpectralCube. """ from __future__ import print_function, absolute_import, division import numpy as np import pytest import tempfile import sys try: import tracemalloc tracemallocOK = True except ImportError: tracemallocOK = False # The comparison of Quantities in test_memory_usage # fail with older versions of numpy from distutils.version import LooseVersion NPY_VERSION_CHECK = LooseVersion(np.version.version) >= "1.13" from .test_moments import moment_cube from .helpers import assert_allclose from ..spectral_cube import SpectralCube from . 
import utilities from astropy import convolution, units as u WINDOWS = sys.platform == "win32" def find_base_nbytes(obj): # from http://stackoverflow.com/questions/34637875/size-of-numpy-strided-array-broadcast-array-in-memory if obj.base is not None: return find_base_nbytes(obj.base) return obj.nbytes def test_pix_size(): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu) s,y,x = sc._pix_size() # float64 by default bytes_per_pix = 8 assert find_base_nbytes(s) == sc.shape[0]*bytes_per_pix assert find_base_nbytes(y) == sc.shape[1]*sc.shape[2]*bytes_per_pix assert find_base_nbytes(x) == sc.shape[1]*sc.shape[2]*bytes_per_pix def test_compare_pix_size_approaches(): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu) sa,ya,xa = sc._pix_size() s,y,x = (sc._pix_size_slice(ii) for ii in range(3)) assert_allclose(sa, s) assert_allclose(ya, y) assert_allclose(xa, x) def test_pix_cen(): mc_hdu = moment_cube() sc = SpectralCube.read(mc_hdu) s,y,x = sc._pix_cen() # float64 by default bytes_per_pix = 8 assert find_base_nbytes(s) == sc.shape[0]*bytes_per_pix assert find_base_nbytes(y) == sc.shape[1]*sc.shape[2]*bytes_per_pix assert find_base_nbytes(x) == sc.shape[1]*sc.shape[2]*bytes_per_pix @pytest.mark.skipif('True') def test_parallel_performance_smoothing(): import timeit setup = 'cube,_ = utilities.generate_gaussian_cube(shape=(300,64,64))' stmt = 'result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(20.0), num_cores={0}, use_memmap=False)' rslt = {} for ncores in (1,2,3,4): time = timeit.timeit(stmt=stmt.format(ncores), setup=setup, number=5, globals=globals()) rslt[ncores] = time print() print("memmap=False") print(rslt) setup = 'cube,_ = utilities.generate_gaussian_cube(shape=(300,64,64))' stmt = 'result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(20.0), num_cores={0}, use_memmap=True)' rslt = {} for ncores in (1,2,3,4): time = timeit.timeit(stmt=stmt.format(ncores), setup=setup, number=5, globals=globals()) rslt[ncores] = time stmt = 'result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(20.0), num_cores={0}, use_memmap=True, parallel=False)' rslt[0] = timeit.timeit(stmt=stmt.format(1), setup=setup, number=5, globals=globals()) print() print("memmap=True") print(rslt) if False: for shape in [(300,64,64), (600,64,64), (900,64,64), (300,128,128), (300,256,256), (900,256,256)]: setup = 'cube,_ = utilities.generate_gaussian_cube(shape={0})'.format(shape) stmt = 'result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(20.0), num_cores={0}, use_memmap=True)' rslt = {} for ncores in (1,2,3,4): time = timeit.timeit(stmt=stmt.format(ncores), setup=setup, number=5, globals=globals()) rslt[ncores] = time stmt = 'result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(20.0), num_cores={0}, use_memmap=True, parallel=False)' rslt[0] = timeit.timeit(stmt=stmt.format(1), setup=setup, number=5, globals=globals()) print() print("memmap=True shape={0}".format(shape)) print(rslt) # python 2.7 doesn't have tracemalloc @pytest.mark.skipif('not tracemallocOK or (sys.version_info.major==3 and sys.version_info.minor<6) or not NPY_VERSION_CHECK or WINDOWS') def test_memory_usage(): """ Make sure that using memmaps happens where expected, for the most part, and that memory doesn't get overused. 
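    These checks work by diffing tracemalloc snapshots: take_snapshot() is
    called before and after each operation, and compare_to(..., 'lineno')
    attributes the per-line allocation differences (size_diff, in bytes).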
""" ntf = tempfile.NamedTemporaryFile() tracemalloc.start() snap1 = tracemalloc.take_snapshot() # create a 64 MB cube cube,_ = utilities.generate_gaussian_cube(shape=[200,200,200]) sz = _.dtype.itemsize snap1b = tracemalloc.take_snapshot() diff = snap1b.compare_to(snap1, 'lineno') diffvals = np.array([dd.size_diff for dd in diff]) # at this point, the generated cube should still exist in memory assert diffvals.max()*u.B >= 200**3*sz*u.B del _ snap2 = tracemalloc.take_snapshot() diff = snap2.compare_to(snap1b, 'lineno') assert diff[0].size_diff*u.B < -0.3*u.MB cube.write(ntf.name, format='fits') # writing the cube should not occupy any more memory snap3 = tracemalloc.take_snapshot() diff = snap3.compare_to(snap2, 'lineno') assert sum([dd.size_diff for dd in diff])*u.B < 100*u.kB del cube # deleting the cube should remove the 64 MB from memory snap4 = tracemalloc.take_snapshot() diff = snap4.compare_to(snap3, 'lineno') assert diff[0].size_diff*u.B < -200**3*sz*u.B cube = SpectralCube.read(ntf.name, format='fits') # reading the cube from filename on disk should result in no increase in # memory use snap5 = tracemalloc.take_snapshot() diff = snap5.compare_to(snap4, 'lineno') assert diff[0].size_diff*u.B < 1*u.MB mask = cube.mask.include() snap6 = tracemalloc.take_snapshot() diff = snap6.compare_to(snap5, 'lineno') assert diff[0].size_diff*u.B >= mask.size*u.B filled_data = cube._get_filled_data(use_memmap=True) snap7 = tracemalloc.take_snapshot() diff = snap7.compare_to(snap6, 'lineno') assert diff[0].size_diff*u.B < 100*u.kB filled_data = cube._get_filled_data(use_memmap=False) snap8 = tracemalloc.take_snapshot() diff = snap8.compare_to(snap7, 'lineno') assert diff[0].size_diff*u.B > 10*u.MB del filled_data # cube is <1e8 bytes, so this is use_memmap=False filled_data = cube.filled_data[:] snap9 = tracemalloc.take_snapshot() diff = snap9.compare_to(snap6, 'lineno') assert diff[0].size_diff*u.B > 10*u.MB # python 2.7 doesn't have tracemalloc @pytest.mark.skipif('not tracemallocOK or (sys.version_info.major==3 and sys.version_info.minor<6) or not NPY_VERSION_CHECK') def test_memory_usage_coordinates(): """ Watch out for high memory usage on huge spatial files """ ntf = tempfile.NamedTemporaryFile() tracemalloc.start() snap1 = tracemalloc.take_snapshot() size = 200 # create a "flat" cube cube,_ = utilities.generate_gaussian_cube(shape=[1,size,size]) sz = _.dtype.itemsize snap1b = tracemalloc.take_snapshot() diff = snap1b.compare_to(snap1, 'lineno') diffvals = np.array([dd.size_diff for dd in diff]) # at this point, the generated cube should still exist in memory assert diffvals.max()*u.B >= size**2*sz*u.B del _ snap2 = tracemalloc.take_snapshot() diff = snap2.compare_to(snap1b, 'lineno') assert diff[0].size_diff*u.B < -0.3*u.MB print(cube) # printing the cube should not occupy any more memory # (it will allocate a few bytes for the cache, but should *not* # load the full size x size coordinate arrays for RA, Dec snap3 = tracemalloc.take_snapshot() diff = snap3.compare_to(snap2, 'lineno') assert sum([dd.size_diff for dd in diff])*u.B < 100*u.kB ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_projection.py0000644000175100001710000006376000000000000023364 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import warnings import pytest import numpy as np from astropy import units as u from astropy.wcs import WCS from astropy.io import fits from radio_beam import Beam, Beams 
from .helpers import assert_allclose from .test_spectral_cube import cube_and_raw from ..spectral_cube import SpectralCube from ..masks import BooleanArrayMask from ..lower_dimensional_structures import (Projection, Slice, OneDSpectrum, VaryingResolutionOneDSpectrum) from ..utils import SliceWarning, WCSCelestialError, BeamUnitsError from . import path # set up for parametrization LDOs = (Projection, Slice, OneDSpectrum) LDOs_2d = (Projection, Slice,) two_qty_2d = np.ones((2,2)) * u.Jy twelve_qty_2d = np.ones((12,12)) * u.Jy two_qty_1d = np.ones((2,)) * u.Jy twelve_qty_1d = np.ones((12,)) * u.Jy data_two = (two_qty_2d, two_qty_2d, two_qty_1d) data_twelve = (twelve_qty_2d, twelve_qty_2d, twelve_qty_1d) data_two_2d = (two_qty_2d, two_qty_2d,) data_twelve_2d = (twelve_qty_2d, twelve_qty_2d,) def load_projection(filename): hdu = fits.open(filename)[0] proj = Projection.from_hdu(hdu) return proj, hdu @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs_2d, data_two_2d)) def test_slices_of_projections_not_projections(LDO, data): # slices of projections that have <2 dimensions should not be projections p = LDO(data, copy=False) assert not isinstance(p[0,0], LDO) assert not isinstance(p[0], LDO) @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs_2d, data_twelve_2d)) def test_copy_false(LDO, data): # copy the data so we can manipulate inplace without affecting other tests image = data.copy() p = LDO(image, copy=False) image[3,4] = 2 * u.Jy assert_allclose(p[3,4], 2 * u.Jy) @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve)) def test_write(LDO, data, tmpdir): p = LDO(data) p.write(tmpdir.join('test.fits').strpath) @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs_2d, data_twelve_2d)) def test_preserve_wcs_to(LDO, data): # regression for #256 image = data.copy() p = LDO(image, copy=False) image[3,4] = 2 * u.Jy p2 = p.to(u.mJy) assert_allclose(p[3,4], 2 * u.Jy) assert_allclose(p[3,4], 2000 * u.mJy) assert p2.wcs == p.wcs @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve)) def test_multiplication(LDO, data): # regression: 265 p = LDO(data, copy=False) p2 = p * 5 assert p2.unit == u.Jy assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value == 5) @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve)) def test_unit_division(LDO, data): # regression: 265 image = data p = LDO(image, copy=False) p2 = p / u.beam assert p2.unit == u.Jy/u.beam assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs_2d, data_twelve_2d)) def test_isnan(LDO, data): # Check that np.isnan strips units image = data.copy() image[5,6] = np.nan p = LDO(image, copy=False) mask = np.isnan(p) assert mask.sum() == 1 assert not hasattr(mask, 'unit') @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve)) def test_self_arith(LDO, data): image = data p = LDO(image, copy=False) p2 = p + p assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value==2) p2 = p - p assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value==0) @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve)) def test_self_arith_with_beam(LDO, data): exp_beam = Beam(1.0 * u.arcsec) image = data p = LDO(image, copy=False) p = p.with_beam(exp_beam) p2 = p + p assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value==2) assert p2.beam == exp_beam p2 = p - p assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value==0) assert p2.beam == exp_beam @pytest.mark.xfail(raises=ValueError, strict=True) def 
test_VRODS_wrong_beams_shape(): ''' Check that passing Beams with a different shape than the data is caught. ''' exp_beams = Beams(np.arange(1, 4) * u.arcsec) p = VaryingResolutionOneDSpectrum(twelve_qty_1d, copy=False, beams=exp_beams) def test_VRODS_with_beams(): exp_beams = Beams(np.arange(1, twelve_qty_1d.size + 1) * u.arcsec) p = VaryingResolutionOneDSpectrum(twelve_qty_1d, copy=False, beams=exp_beams) assert (p.beams == exp_beams).all() new_beams = Beams(np.arange(2, twelve_qty_1d.size + 2) * u.arcsec) p = p.with_beams(new_beams) assert np.all(p.beams == new_beams) def test_VRODS_slice_with_beams(): exp_beams = Beams(np.arange(1, twelve_qty_1d.size + 1) * u.arcsec) p = VaryingResolutionOneDSpectrum(twelve_qty_1d, copy=False, wcs=WCS(naxis=1), beams=exp_beams) assert np.all(p[:5].beams == exp_beams[:5]) def test_VRODS_arith_with_beams(): exp_beams = Beams(np.arange(1, twelve_qty_1d.size + 1) * u.arcsec) p = VaryingResolutionOneDSpectrum(twelve_qty_1d, copy=False, beams=exp_beams) p2 = p + p assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value==2) assert np.all(p2.beams == exp_beams) p2 = p - p assert hasattr(p2, '_wcs') assert p2.wcs == p.wcs assert np.all(p2.value==0) assert np.all(p2.beams == exp_beams) def test_onedspectrum_specaxis_units(): test_wcs = WCS(naxis=1) test_wcs.wcs.cunit = ["m/s"] test_wcs.wcs.ctype = ["VELO-LSR"] p = OneDSpectrum(twelve_qty_1d, wcs=test_wcs) assert p.spectral_axis.unit == u.Unit("m/s") def test_onedspectrum_with_spectral_unit(): test_wcs = WCS(naxis=1) test_wcs.wcs.cunit = ["m/s"] test_wcs.wcs.ctype = ["VELO-LSR"] p = OneDSpectrum(twelve_qty_1d, wcs=test_wcs) p_new = p.with_spectral_unit(u.km/u.s) assert p_new.spectral_axis.unit == u.Unit("km/s") np.testing.assert_equal(p_new.spectral_axis.value, 1e-3*p.spectral_axis.value) def test_onedspectrum_input_mask_type(): test_wcs = WCS(naxis=1) test_wcs.wcs.cunit = ["m/s"] test_wcs.wcs.ctype = ["VELO-LSR"] np_mask = np.ones(twelve_qty_1d.shape, dtype=bool) np_mask[1] = False bool_mask = BooleanArrayMask(np_mask, wcs=test_wcs, shape=np_mask.shape) # numpy array p = OneDSpectrum(twelve_qty_1d, wcs=test_wcs, mask=np_mask) assert (p.mask.include() == bool_mask.include()).all() # MaskBase p = OneDSpectrum(twelve_qty_1d, wcs=test_wcs, mask=bool_mask) assert (p.mask.include() == bool_mask.include()).all() # No mask ones_mask = BooleanArrayMask(np.ones(twelve_qty_1d.shape, dtype=bool), wcs=test_wcs, shape=np_mask.shape) p = OneDSpectrum(twelve_qty_1d, wcs=test_wcs, mask=None) assert (p.mask.include() == ones_mask.include()).all() def test_slice_tricks(): test_wcs_1 = WCS(naxis=1) test_wcs_2 = WCS(naxis=2) spec = OneDSpectrum(twelve_qty_1d, wcs=test_wcs_1) im = Slice(twelve_qty_2d, wcs=test_wcs_2) with warnings.catch_warnings(record=True) as w: new = spec[:,None,None] * im[None,:,:] assert new.ndim == 3 # two warnings because we're doing BOTH slices! assert len(w) == 2 assert w[0].category == SliceWarning with warnings.catch_warnings(record=True) as w: new = spec.array[:,None,None] * im.array[None,:,:] assert new.ndim == 3 assert len(w) == 0 def test_array_property(): test_wcs_1 = WCS(naxis=1) spec = OneDSpectrum(twelve_qty_1d, wcs=test_wcs_1) arr = spec.array # these are supposed to be the same object, but the 'is' tests fails! 
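    # (each access to `.array` appears to return a fresh plain-ndarray view of
    # the same buffer, so object identity fails even though the contents
    # compare equal)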
    assert spec.array.data == spec.data
    assert isinstance(arr, np.ndarray)
    assert not isinstance(arr, u.Quantity)


def test_quantity_property():
    test_wcs_1 = WCS(naxis=1)
    spec = OneDSpectrum(twelve_qty_1d, wcs=test_wcs_1)
    arr = spec.quantity
    # these are supposed to be the same object, but the 'is' test fails!
    assert spec.array.data == spec.data
    assert isinstance(arr, u.Quantity)
    assert not isinstance(arr, OneDSpectrum)


def test_projection_with_beam(data_55):

    exp_beam = Beam(1.0 * u.arcsec)

    proj, hdu = load_projection(data_55)

    # uses from_hdu, which passes beam as kwarg
    assert proj.beam == exp_beam
    assert proj.meta['beam'] == exp_beam

    # load beam from meta
    exp_beam = Beam(1.5 * u.arcsec)
    meta = {"beam": exp_beam}
    new_proj = Projection(hdu.data, wcs=proj.wcs, meta=meta)
    assert new_proj.beam == exp_beam
    assert new_proj.meta['beam'] == exp_beam

    # load beam from given header
    exp_beam = Beam(2.0 * u.arcsec)
    header = hdu.header.copy()
    header = exp_beam.attach_to_header(header)
    new_proj = Projection(hdu.data, wcs=proj.wcs, header=header,
                          read_beam=True)
    assert new_proj.beam == exp_beam
    assert new_proj.meta['beam'] == exp_beam

    # load beam from beam object
    exp_beam = Beam(3.0 * u.arcsec)
    header = hdu.header.copy()
    del header["BMAJ"], header["BMIN"], header["BPA"]
    new_proj = Projection(hdu.data, wcs=proj.wcs, header=header,
                          beam=exp_beam)
    assert new_proj.beam == exp_beam
    assert new_proj.meta['beam'] == exp_beam

    # Slice the projection with a beam and check it's still there
    assert new_proj[:1, :1].beam == exp_beam


def test_onedspectrum_with_beam():

    exp_beam = Beam(1.0 * u.arcsec)

    test_wcs_1 = WCS(naxis=1)
    spec = OneDSpectrum(twelve_qty_1d, wcs=test_wcs_1)

    # load beam from meta
    meta = {"beam": exp_beam}
    new_spec = OneDSpectrum(spec.data, wcs=spec.wcs, meta=meta)
    assert new_spec.beam == exp_beam
    assert new_spec.meta['beam'] == exp_beam

    # load beam from given header
    hdu = spec.hdu
    exp_beam = Beam(2.0 * u.arcsec)
    header = hdu.header.copy()
    header = exp_beam.attach_to_header(header)
    new_spec = OneDSpectrum(hdu.data, wcs=spec.wcs, header=header,
                            read_beam=True)
    assert new_spec.beam == exp_beam
    assert new_spec.meta['beam'] == exp_beam

    # load beam from beam object
    exp_beam = Beam(3.0 * u.arcsec)
    header = hdu.header.copy()
    new_spec = OneDSpectrum(hdu.data, wcs=spec.wcs, header=header,
                            beam=exp_beam)
    assert new_spec.beam == exp_beam
    assert new_spec.meta['beam'] == exp_beam

    # Slice the spectrum with a beam and check it's still there
    assert new_spec[:1].beam == exp_beam


@pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve))
def test_ldo_attach_beam(LDO, data):

    exp_beam = Beam(1.0 * u.arcsec)
    newbeam = Beam(2.0 * u.arcsec)

    p = LDO(data, copy=False, beam=exp_beam)
    new_p = p.with_beam(newbeam)

    assert p.beam == exp_beam
    assert p.meta['beam'] == exp_beam

    assert new_p.beam == newbeam
    assert new_p.meta['beam'] == newbeam


@pytest.mark.xfail(raises=BeamUnitsError, strict=True)
@pytest.mark.parametrize(('LDO', 'data'), zip(LDOs, data_twelve))
def test_ldo_attach_beam_jybm_error(LDO, data):

    exp_beam = Beam(1.0 * u.arcsec)
    newbeam = Beam(2.0 * u.arcsec)

    data = data.value * u.Jy / u.beam

    p = LDO(data, copy=False, beam=exp_beam)

    # Attaching with no beam should work.
new_p = p.with_beam(newbeam) # Trying to change the beam should now raise a BeamUnitsError new_p = new_p.with_beam(newbeam) @pytest.mark.parametrize(('LDO', 'data'), zip(LDOs_2d, data_two_2d)) def test_projection_from_hdu(LDO, data): p = LDO(data, copy=False) hdu = p.hdu p_new = LDO.from_hdu(hdu) assert (p == p_new).all() def test_projection_subimage(data_55): proj, hdu = load_projection(data_55) proj1 = proj.subimage(xlo=1, xhi=3) proj2 = proj.subimage(xlo=24.06269 * u.deg, xhi=24.06206 * u.deg) proj3 = proj.subimage(xlo=24.06269*u.deg, xhi=3) proj4 = proj.subimage(xlo=1, xhi=24.06206*u.deg) assert proj1.shape == (5, 2) assert proj2.shape == (5, 2) assert proj3.shape == (5, 2) assert proj4.shape == (5, 2) assert proj1.wcs.wcs.compare(proj2.wcs.wcs) assert proj1.wcs.wcs.compare(proj3.wcs.wcs) assert proj1.wcs.wcs.compare(proj4.wcs.wcs) assert proj.beam == proj1.beam assert proj.beam == proj2.beam proj4 = proj.subimage(ylo=1, yhi=3) proj5 = proj.subimage(ylo=29.93464 * u.deg, yhi=29.93522 * u.deg) proj6 = proj.subimage(ylo=1, yhi=29.93522 * u.deg) proj7 = proj.subimage(ylo=29.93464 * u.deg, yhi=3) assert proj4.shape == (2, 5) assert proj5.shape == (2, 5) assert proj6.shape == (2, 5) assert proj7.shape == (2, 5) assert proj4.wcs.wcs.compare(proj5.wcs.wcs) assert proj4.wcs.wcs.compare(proj6.wcs.wcs) assert proj4.wcs.wcs.compare(proj7.wcs.wcs) # Test mixed slicing in both spatial directions proj1xy = proj.subimage(xlo=1, xhi=3, ylo=1, yhi=3) proj2xy = proj.subimage(xlo=24.06269*u.deg, xhi=3, ylo=1,yhi=29.93522 * u.deg) proj3xy = proj.subimage(xlo=1, xhi=24.06206*u.deg, ylo=29.93464 * u.deg, yhi=3) assert proj1xy.shape == (2, 2) assert proj2xy.shape == (2, 2) assert proj3xy.shape == (2, 2) assert proj1xy.wcs.wcs.compare(proj2xy.wcs.wcs) assert proj1xy.wcs.wcs.compare(proj3xy.wcs.wcs) proj5 = proj.subimage() assert proj5.shape == proj.shape assert proj5.wcs.wcs.compare(proj.wcs.wcs) assert np.all(proj5.value == proj.value) def test_projection_subimage_nocelestial_fail(data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) proj = cube.moment0(axis=1) with pytest.raises(WCSCelestialError, match="WCS does not contain two spatial axes."): proj.subimage(xlo=1, xhi=3) @pytest.mark.parametrize('LDO', LDOs_2d) def test_twod_input_mask_type(LDO): test_wcs = WCS(naxis=2) test_wcs.wcs.cunit = ["deg", "deg"] test_wcs.wcs.ctype = ["RA---SIN", 'DEC--SIN'] np_mask = np.ones(twelve_qty_2d.shape, dtype=bool) np_mask[1] = False bool_mask = BooleanArrayMask(np_mask, wcs=test_wcs, shape=np_mask.shape) # numpy array p = LDO(twelve_qty_2d, wcs=test_wcs, mask=np_mask) assert (p.mask.include() == bool_mask.include()).all() # MaskBase p = LDO(twelve_qty_2d, wcs=test_wcs, mask=bool_mask) assert (p.mask.include() == bool_mask.include()).all() # No mask ones_mask = BooleanArrayMask(np.ones(twelve_qty_2d.shape, dtype=bool), wcs=test_wcs, shape=np_mask.shape) p = LDO(twelve_qty_2d, wcs=test_wcs, mask=None) assert (p.mask.include() == ones_mask.include()).all() @pytest.mark.xfail def test_mask_convolve(): # Numpy is fundamentally incompatible with the objects we have created. # np.ma.is_masked(array) checks specifically for the array's _mask # attribute. We would have to refactor deeply to correct this, and I # really don't want to do that because 'None' is a much more reasonable # and less dangerous default for a mask. 
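    # For reference: np.ma.getmask(x) is essentially
    # getattr(x, '_mask', np.ma.nomask), so any object that does not carry a
    # boolean `_mask` ndarray looks unmasked to the np.ma machinery.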
test_wcs_1 = WCS(naxis=1) spec = OneDSpectrum(twelve_qty_1d, wcs=test_wcs_1) assert spec.mask is False from astropy.convolution import convolve,Box1DKernel convolve(spec, Box1DKernel(3)) def test_convolve(): test_wcs_1 = WCS(naxis=1) spec = OneDSpectrum(twelve_qty_1d, wcs=test_wcs_1) from astropy.convolution import Box1DKernel specsmooth = spec.spectral_smooth(Box1DKernel(1)) np.testing.assert_allclose(spec, specsmooth) def test_spectral_interpolate(): test_wcs_1 = WCS(naxis=1) test_wcs_1.wcs.cunit[0] = 'GHz' spec = OneDSpectrum(np.arange(12)*u.Jy, wcs=test_wcs_1) new_xaxis = test_wcs_1.wcs_pix2world(np.linspace(0,11,23), 0)[0] * u.Unit(test_wcs_1.wcs.cunit[0]) new_spec = spec.spectral_interpolate(new_xaxis) np.testing.assert_allclose(new_spec, np.linspace(0,11,23)*u.Jy) def test_spectral_interpolate_with_mask(data_522_delta, use_dask): hdu = fits.open(data_522_delta)[0] # Swap the velocity axis so indiff < 0 in spectral_interpolate hdu.header["CDELT3"] = - hdu.header["CDELT3"] cube = SpectralCube.read(hdu, use_dask=use_dask) mask = np.ones(cube.shape, dtype=bool) mask[:2] = False masked_cube = cube.with_mask(mask) spec = masked_cube[:, 0, 0] # midpoint between each position sg = (spec.spectral_axis[1:] + spec.spectral_axis[:-1])/2. result = spec.spectral_interpolate(spectral_grid=sg[::-1]) # The output makes CDELT3 > 0 (reversed spectral axis) so the masked # portion are the final 2 channels. np.testing.assert_almost_equal(result.filled_data[:].value, [0.0, 0.5, np.NaN, np.NaN]) def test_spectral_interpolate_reversed(data_522_delta, use_dask): cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) # Reverse spectral axis sg = cube.spectral_axis[::-1] spec = cube[:, 0, 0] result = spec.spectral_interpolate(spectral_grid=sg) np.testing.assert_almost_equal(sg.value, result.spectral_axis.value) def test_spectral_interpolate_with_fillvalue(data_522_delta, use_dask): cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) # Step one channel out of bounds. 
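    # i.e. sg = spectral_axis[0] minus 1..4 channel widths: every requested
    # point falls outside the original spectral axis, so the whole
    # interpolated spectrum should come back as fill_value (42, checked below)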
sg = ((cube.spectral_axis[0]) - (cube.spectral_axis[1] - cube.spectral_axis[0]) * np.linspace(1,4,4)) spec = cube[:, 0, 0] result = spec.spectral_interpolate(spectral_grid=sg, fill_value=42) np.testing.assert_almost_equal(result.value, np.ones(4)*42) def test_spectral_units(data_255_delta, use_dask): # regression test for issue 391 cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) sp = cube[:,0,0] assert sp.spectral_axis.unit == u.km/u.s assert sp.header['CUNIT1'] == 'km s-1' sp = cube.with_spectral_unit(u.m/u.s)[:,0,0] assert sp.spectral_axis.unit == u.m/u.s assert sp.header['CUNIT1'] in ('m s-1', 'm/s') def test_repr_1d(data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) sp = cube[:,0,0] print(sp) print(sp[1:-1]) assert 'OneDSpectrum' in sp.__repr__() assert 'OneDSpectrum' in sp[1:-1].__repr__() def test_1d_slices(data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) sp = cube[:,0,0] assert sp.max() == cube.max(axis=0)[0,0] assert not isinstance(sp.max(), OneDSpectrum) sp = cube[:-1,0,0] assert sp.max() == cube[:-1,:,:].max(axis=0)[0,0] assert not isinstance(sp.max(), OneDSpectrum) @pytest.mark.parametrize('method', ('min', 'max', 'std', 'mean', 'sum', 'cumsum', 'nansum', 'ptp', 'var'), ) def test_1d_slice_reductions(method, data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) sp = cube[:,0,0] if hasattr(cube, method): assert getattr(sp, method)() == getattr(cube, method)(axis=0)[0,0] else: getattr(sp, method)() assert hasattr(sp, '_fill_value') assert 'OneDSpectrum' in sp.__repr__() assert 'OneDSpectrum' in sp[1:-1].__repr__() def test_1d_slice_round(data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) sp = cube[:,0,0] assert all(sp.value.round() == sp.round().value) assert hasattr(sp, '_fill_value') assert hasattr(sp.round(), '_fill_value') assert 'OneDSpectrum' in sp.round().__repr__() assert 'OneDSpectrum' in sp[1:-1].round().__repr__() def test_LDO_arithmetic(data_vda, use_dask): cube, data = cube_and_raw(data_vda, use_dask=use_dask) sp = cube[:,0,0] spx2 = sp * 2 assert np.all(spx2.value == sp.value*2) assert np.all(spx2.filled_data[:].value == sp.value*2) def test_beam_jtok_2D(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = 'Jy / beam' cube._unit = u.Jy / u.beam plane = cube[0] freq = cube.with_spectral_unit(u.GHz).spectral_axis[0] equiv = plane.beam.jtok_equiv(freq) jtok = plane.beam.jtok(freq) Kplane = plane.to(u.K, equivalencies=equiv, freq=freq) np.testing.assert_almost_equal(Kplane.value, (plane.value * jtok).value) # test that the beam equivalencies are correctly automatically defined Kplane = plane.to(u.K, freq=freq) np.testing.assert_almost_equal(Kplane.value, (plane.value * jtok).value) bunits_list = [u.Jy / u.beam, u.K, u.Jy / u.sr, u.Jy / u.pix, u.Jy / u.arcsec**2, u.mJy / u.beam, u.mK] @pytest.mark.parametrize(('init_unit'), bunits_list) def test_unit_conversions_general_2D(data_advs, use_dask, init_unit): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = init_unit.to_string() cube._unit = init_unit plane = cube[0] # Check all unit conversion combos: for targ_unit in bunits_list: newplane = plane.to(targ_unit) if init_unit == targ_unit: np.testing.assert_almost_equal(newplane.value, plane.value) else: roundtrip_plane = newplane.to(init_unit) np.testing.assert_almost_equal(roundtrip_plane.value, plane.value) # TODO: Our 1D object do NOT retain 
spatial info that is needed for other BUNIT conversion # e.g., Jy/sr, Jy/pix. So we're limited to Jy/beam -> K conversion for now # See: https://github.com/radio-astro-tools/spectral-cube/pull/395 bunits_list_1D = [u.Jy / u.beam, u.K, u.mJy / u.beam, u.mK] @pytest.mark.parametrize(('init_unit'), bunits_list_1D) def test_unit_conversions_general_1D(data_advs, use_dask, init_unit): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = init_unit.to_string() cube._unit = init_unit spec = cube[:, 0, 0] # Check all unit conversion combos: for targ_unit in bunits_list_1D: newspec = spec.to(targ_unit) if init_unit == targ_unit: np.testing.assert_almost_equal(newspec.value, spec.value) else: roundtrip_spec = newspec.to(init_unit) np.testing.assert_almost_equal(roundtrip_spec.value, spec.value) @pytest.mark.parametrize(('init_unit'), bunits_list_1D) def test_multibeams_unit_conversions_general_1D(data_vda_beams, use_dask, init_unit): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) cube._meta['BUNIT'] = init_unit.to_string() cube._unit = init_unit spec = cube[:, 0, 0] # Check all unit conversion combos: for targ_unit in bunits_list_1D: newspec = spec.to(targ_unit) if init_unit == targ_unit: np.testing.assert_almost_equal(newspec.value, spec.value) else: roundtrip_spec = newspec.to(init_unit) np.testing.assert_almost_equal(roundtrip_spec.value, spec.value) def test_basic_arrayness(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) assert cube.shape == data.shape spec = cube[:,0,0] assert np.all(np.asanyarray(spec).value == data[:,0,0]) assert np.all(np.array(spec) == data[:,0,0]) assert np.all(np.asarray(spec) == data[:,0,0]) # These are commented out because it is presently not possible to convert # projections to masked arrays # assert np.all(np.ma.asanyarray(spec).value == data[:,0,0]) # assert np.all(np.ma.asarray(spec) == data[:,0,0]) # assert np.all(np.ma.array(spec) == data[:,0,0]) slc = cube[0,:,:] assert np.all(np.asanyarray(slc).value == data[0,:,:]) assert np.all(np.array(slc) == data[0,:,:]) assert np.all(np.asarray(slc) == data[0,:,:]) # assert np.all(np.ma.asanyarray(slc).value == data[0,:,:]) # assert np.all(np.ma.asarray(slc) == data[0,:,:]) # assert np.all(np.ma.array(slc) == data[0,:,:]) def test_spatial_world_extrema_2D(data_522_delta, use_dask): hdu = fits.open(data_522_delta)[0] cube = SpectralCube.read(hdu, use_dask=use_dask) plane = cube[0] assert (cube.world_extrema == plane.world_extrema).all() assert (cube.longitude_extrema == plane.longitude_extrema).all() assert (cube.latitude_extrema == plane.latitude_extrema).all() @pytest.mark.parametrize('view', (np.s_[:, :], np.s_[::2, :], np.s_[0])) def test_spatial_world(view, data_adv, use_dask): p = path(data_adv) # d = fits.getdata(p) # wcs = WCS(p) # c = SpectralCube(d, wcs) c = SpectralCube.read(p, use_dask=use_dask) plane = c[0] wcs = plane.wcs shp = plane.shape inds = np.indices(plane.shape) pix = np.column_stack([i.ravel() for i in inds[::-1]]) world = wcs.all_pix2world(pix, 0).T world = [w.reshape(shp) for w in world] world = [w[view] * u.Unit(wcs.wcs.cunit[i]) for i, w in enumerate(world)][::-1] w2 = plane.world[view] for result, expected in zip(w2, world): assert_allclose(result, expected) # Test world_flattened here, too # TODO: Enable once 2D masking is a thing w2_flat = plane.flattened_world(view=view) for result, expected in zip(w2_flat, world): print(result.shape, expected.flatten().shape) assert_allclose(result, expected.flatten()) 
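# Illustrative sketch, not part of the original test suite: a minimal
# round-trip of the `world` property exercised above. `_example_world_roundtrip`
# is a hypothetical helper name; `plane` is any celestial 2D Projection
# (e.g. `SpectralCube.read(...)[0]`).
def _example_world_roundtrip(plane):
    # `world` yields Quantities in reverse WCS order (lat, lon) for a 2D slice
    lat, lon = plane.world[:, :]
    # converting back through the underlying WCS should reproduce the pixel
    # grid, i.e. np.indices(plane.shape)[::-1]
    x, y = plane.wcs.wcs_world2pix(lon.value, lat.value, 0)
    return x, y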
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_regrid.py0000644000175100001710000004424200000000000022456 0ustar00runnerdockerimport sys import pytest import tempfile import numpy as np from astropy import units as u from astropy import convolution from astropy.wcs import WCS from astropy import wcs from astropy.io import fits try: import tracemalloc tracemallocOK = True except ImportError: tracemallocOK = False # The comparison of Quantities in test_memory_usage # fail with older versions of numpy from distutils.version import LooseVersion NPY_VERSION_CHECK = LooseVersion(np.version.version) >= "1.13" from radio_beam import beam, Beam from .. import SpectralCube from ..utils import WCSCelestialError from .test_spectral_cube import cube_and_raw from .test_projection import load_projection from . import path, utilities WINDOWS = sys.platform == "win32" def test_convolution(data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) # 1" convolved with 1.5" -> 1.8027.... target_beam = Beam(1.802775637731995*u.arcsec, 1.802775637731995*u.arcsec, 0*u.deg) conv_cube = cube.convolve_to(target_beam) expected = convolution.Gaussian2DKernel((1.5*u.arcsec / beam.SIGMA_TO_FWHM / (5.555555555555e-4*u.deg)).decompose().value, x_size=5, y_size=5, ) expected.normalize() np.testing.assert_almost_equal(expected.array, conv_cube.filled_data[0,:,:].value) # 2nd layer is all zeros assert np.all(conv_cube.filled_data[1,:,:] == 0.0) def test_beams_convolution(data_455_delta_beams, use_dask): cube, data = cube_and_raw(data_455_delta_beams, use_dask=use_dask) # 1" convolved with 1.5" -> 1.8027.... target_beam = Beam(1.802775637731995*u.arcsec, 1.802775637731995*u.arcsec, 0*u.deg) conv_cube = cube.convolve_to(target_beam) pixscale = wcs.utils.proj_plane_pixel_area(cube.wcs.celestial)**0.5*u.deg for ii, bm in enumerate(cube.beams): expected = target_beam.deconvolve(bm).as_kernel(pixscale, x_size=5, y_size=5) expected.normalize() np.testing.assert_almost_equal(expected.array, conv_cube.filled_data[ii,:,:].value) def test_beams_convolution_equal(data_522_delta_beams, use_dask): cube, data = cube_and_raw(data_522_delta_beams, use_dask=use_dask) # Only checking that the equal beam case is handled correctly. # Fake the beam in the first channel. Then ensure that the first channel # has NOT been convolved. 
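    # (deconvolving the target beam by an identical channel beam leaves an
    # effectively zero-size kernel, so convolve_to should pass that channel
    # through unchanged)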
target_beam = Beam(1.0 * u.arcsec, 1.0 * u.arcsec, 0.0 * u.deg) cube.beams.major[0] = target_beam.major cube.beams.minor[0] = target_beam.minor cube.beams.pa[0] = target_beam.pa conv_cube = cube.convolve_to(target_beam) np.testing.assert_almost_equal(cube.filled_data[0].value, conv_cube.filled_data[0].value) @pytest.mark.parametrize('use_memmap', (True, False)) def test_reproject(use_memmap, data_adv, use_dask): pytest.importorskip('reproject') cube, data = cube_and_raw(data_adv, use_dask=use_dask) wcs_in = WCS(cube.header) wcs_out = wcs_in.deepcopy() wcs_out.wcs.ctype = ['GLON-SIN', 'GLAT-SIN', wcs_in.wcs.ctype[2]] wcs_out.wcs.crval = [134.37608, -31.939241, wcs_in.wcs.crval[2]] wcs_out.wcs.crpix = [2., 2., wcs_in.wcs.crpix[2]] header_out = cube.header header_out['NAXIS1'] = 4 header_out['NAXIS2'] = 5 header_out['NAXIS3'] = cube.shape[0] header_out.update(wcs_out.to_header()) result = cube.reproject(header_out, use_memmap=use_memmap) assert result.shape == (cube.shape[0], 5, 4) # Check WCS in reprojected matches wcs_out assert wcs_out.wcs.compare(result.wcs.wcs) # And that the headers have equivalent WCS info. result_wcs_from_header = WCS(result.header) assert result_wcs_from_header.wcs.compare(wcs_out.wcs) def test_spectral_smooth(data_522_delta, use_dask): cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0), use_memmap=False) np.testing.assert_almost_equal(result[:,0,0].value, convolution.Gaussian1DKernel(1.0, x_size=5).array, 4) result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0), use_memmap=True) np.testing.assert_almost_equal(result[:,0,0].value, convolution.Gaussian1DKernel(1.0, x_size=5).array, 4) def test_catch_kernel_with_units(data_522_delta, use_dask): # Passing a kernel with a unit should raise a u.UnitsError cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) with pytest.raises(u.UnitsError, match="The convolution kernel should be defined without a unit."): cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0 * u.one), use_memmap=False) @pytest.mark.skipif('WINDOWS') def test_spectral_smooth_4cores(data_522_delta): pytest.importorskip('joblib') cube, data = cube_and_raw(data_522_delta, use_dask=False) result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0), num_cores=4, use_memmap=True) np.testing.assert_almost_equal(result[:,0,0].value, convolution.Gaussian1DKernel(1.0, x_size=5).array, 4) # this is one way to test non-parallel mode result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0), num_cores=4, use_memmap=False) np.testing.assert_almost_equal(result[:,0,0].value, convolution.Gaussian1DKernel(1.0, x_size=5).array, 4) # num_cores = 4 is a contradiction with parallel=False, so we want to make # sure it fails with pytest.raises(ValueError, match=("parallel execution was not requested, but " "multiple cores were: these are incompatible " "options. Either specify num_cores=1 or " "parallel=True")): result = cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0), num_cores=4, parallel=False) np.testing.assert_almost_equal(result[:,0,0].value, convolution.Gaussian1DKernel(1.0, x_size=5).array, 4) def test_spectral_smooth_fail(data_522_delta_beams, use_dask): cube, data = cube_and_raw(data_522_delta_beams, use_dask=use_dask) with pytest.raises(AttributeError, match=("VaryingResolutionSpectralCubes can't be " "spectrally smoothed. 
Convolve to a " "common resolution with `convolve_to` before " "attempting spectral smoothed.")): cube.spectral_smooth(kernel=convolution.Gaussian1DKernel(1.0)) def test_spectral_interpolate(data_522_delta, use_dask): cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) orig_wcs = cube.wcs.deepcopy() # midpoint between each position sg = (cube.spectral_axis[1:] + cube.spectral_axis[:-1])/2. result = cube.spectral_interpolate(spectral_grid=sg) np.testing.assert_almost_equal(result[:,0,0].value, [0.0, 0.5, 0.5, 0.0]) assert cube.wcs.wcs.compare(orig_wcs.wcs) def test_spectral_interpolate_with_fillvalue(data_522_delta, use_dask): cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) # Step one channel out of bounds. sg = ((cube.spectral_axis[0]) - (cube.spectral_axis[1] - cube.spectral_axis[0]) * np.linspace(1,4,4)) result = cube.spectral_interpolate(spectral_grid=sg, fill_value=42) np.testing.assert_almost_equal(result[:,0,0].value, np.ones(4)*42) def test_spectral_interpolate_fail(data_522_delta_beams, use_dask): cube, data = cube_and_raw(data_522_delta_beams, use_dask=use_dask) with pytest.raises(AttributeError, match=("VaryingResolutionSpectralCubes can't be " "spectrally interpolated. Convolve to a " "common resolution with `convolve_to` before " "attempting spectral interpolation.")): cube.spectral_interpolate(5) def test_spectral_interpolate_with_mask(data_522_delta, use_dask): hdul = fits.open(data_522_delta) hdu = hdul[0] # Swap the velocity axis so indiff < 0 in spectral_interpolate hdu.header["CDELT3"] = - hdu.header["CDELT3"] cube = SpectralCube.read(hdu, use_dask=use_dask) mask = np.ones(cube.shape, dtype=bool) mask[:2] = False masked_cube = cube.with_mask(mask) orig_wcs = cube.wcs.deepcopy() # midpoint between each position sg = (cube.spectral_axis[1:] + cube.spectral_axis[:-1])/2. result = masked_cube.spectral_interpolate(spectral_grid=sg[::-1]) # The output makes CDELT3 > 0 (reversed spectral axis) so the masked # portion are the final 2 channels. np.testing.assert_almost_equal(result[:, 0, 0].value, [0.0, 0.5, np.NaN, np.NaN]) assert cube.wcs.wcs.compare(orig_wcs.wcs) hdul.close() def test_spectral_interpolate_reversed(data_522_delta, use_dask): cube, data = cube_and_raw(data_522_delta, use_dask=use_dask) orig_wcs = cube.wcs.deepcopy() # Reverse spectral axis sg = cube.spectral_axis[::-1] result = cube.spectral_interpolate(spectral_grid=sg) np.testing.assert_almost_equal(sg.value, result.spectral_axis.value) def test_convolution_2D(data_55_delta): proj, hdu = load_projection(data_55_delta) # 1" convolved with 1.5" -> 1.8027.... 
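    # (Gaussian beams add in quadrature: sqrt((1.0)**2 + (1.5)**2) arcsec
    #  = sqrt(3.25) arcsec ~= 1.802775637731995 arcsec)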
target_beam = Beam(1.802775637731995*u.arcsec, 1.802775637731995*u.arcsec, 0*u.deg) conv_proj = proj.convolve_to(target_beam) expected = convolution.Gaussian2DKernel((1.5*u.arcsec / beam.SIGMA_TO_FWHM / (5.555555555555e-4*u.deg)).decompose().value, x_size=5, y_size=5, ) expected.normalize() np.testing.assert_almost_equal(expected.array, conv_proj.value) assert conv_proj.beam == target_beam # Pass a kwarg to the convolution function conv_proj = proj.convolve_to(target_beam, nan_treatment='fill') def test_nocelestial_convolution_2D_fail(data_255_delta, use_dask): cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) proj = cube.moment0(axis=1) test_beam = Beam(1.0 * u.arcsec) with pytest.raises(WCSCelestialError, match="WCS does not contain two spatial axes."): proj.convolve_to(test_beam) def test_reproject_2D(data_55): pytest.importorskip('reproject') proj, hdu = load_projection(data_55) wcs_in = WCS(proj.header) wcs_out = wcs_in.deepcopy() wcs_out.wcs.ctype = ['GLON-SIN', 'GLAT-SIN'] wcs_out.wcs.crval = [134.37608, -31.939241] wcs_out.wcs.crpix = [2., 2.] header_out = proj.header header_out['NAXIS1'] = 4 header_out['NAXIS2'] = 5 header_out.update(wcs_out.to_header()) result = proj.reproject(header_out) assert result.shape == (5, 4) assert result.beam == proj.beam # Check WCS in reprojected matches wcs_out assert wcs_out.wcs.compare(result.wcs.wcs) # And that the headers have equivalent WCS info. result_wcs_from_header = WCS(result.header) assert result_wcs_from_header.wcs.compare(wcs_out.wcs) def test_nocelestial_reproject_2D_fail(data_255_delta, use_dask): pytest.importorskip('reproject') cube, data = cube_and_raw(data_255_delta, use_dask=use_dask) proj = cube.moment0(axis=1) with pytest.raises(WCSCelestialError, match="WCS does not contain two spatial axes."): proj.reproject(cube.header) @pytest.mark.parametrize('use_memmap', (True,False)) def test_downsample(use_memmap, data_255): # FIXME: this test should be updated to use the use_dask fixture once # DaskSpectralCube.downsample_axis is fixed. cube, data = cube_and_raw(data_255, use_dask=False) dscube = cube.downsample_axis(factor=2, axis=0, use_memmap=use_memmap) expected = data.mean(axis=0) np.testing.assert_almost_equal(expected[None,:,:], dscube.filled_data[:].value) dscube = cube.downsample_axis(factor=2, axis=1, use_memmap=use_memmap) expected = np.array([data[:,:2,:].mean(axis=1), data[:,2:4,:].mean(axis=1), data[:,4:,:].mean(axis=1), # just data[:,4,:] ]).swapaxes(0,1) assert expected.shape == (2,3,5) assert dscube.shape == (2,3,5) np.testing.assert_almost_equal(expected, dscube.filled_data[:].value) dscube = cube.downsample_axis(factor=2, axis=1, truncate=True, use_memmap=use_memmap) expected = np.array([data[:,:2,:].mean(axis=1), data[:,2:4,:].mean(axis=1), ]).swapaxes(0,1) np.testing.assert_almost_equal(expected, dscube.filled_data[:].value) @pytest.mark.parametrize('use_memmap', (True,False)) def test_downsample_wcs(use_memmap, data_255): # FIXME: this test should be updated to use the use_dask fixture once # DaskSpectralCube.downsample_axis is fixed. 
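    # For reference, a factor-2 block average maps old 0-based pixel
    # coordinates to new ones as new = (old - 0.5) / 2; the two spot checks
    # below follow from this relation.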
    cube, data = cube_and_raw(data_255, use_dask=False)

    dscube = (cube
              .downsample_axis(factor=2, axis=1, use_memmap=use_memmap)
              .downsample_axis(factor=2, axis=2, use_memmap=use_memmap))

    # pixel [0,0] in the new cube lands halfway between pixels [0,0] and [1,1]
    # of the old cube, i.e. at old pixel coordinate [0.5, 0.5] (0-based)
    lonnew, latnew = dscube.wcs.celestial.wcs_pix2world(0, 0, 0)
    xpixold_ypixold = np.array(cube.wcs.celestial.wcs_world2pix(lonnew, latnew, 0))
    np.testing.assert_almost_equal(xpixold_ypixold, (0.5, 0.5))

    # the center of the bottom-left pixel of the original frame ([1,1] in
    # 1-based FITS coordinates, equivalently [-0.25, -0.25] in 0-based
    # coordinates of the new frame) will now be at [0.75, 0.75] (1-based)
    # in the new frame
    lonold, latold = cube.wcs.celestial.wcs_pix2world(1, 1, 1)
    xpixnew_ypixnew = np.array(dscube.wcs.celestial.wcs_world2pix(lonold, latold, 1))
    np.testing.assert_almost_equal(xpixnew_ypixnew, (0.75, 0.75))


@pytest.mark.skipif('not tracemallocOK or (sys.version_info.major==3 and sys.version_info.minor<6) or not NPY_VERSION_CHECK')
def test_reproject_3D_memory():

    pytest.importorskip('reproject')

    tracemalloc.start()

    snap1 = tracemalloc.take_snapshot()

    # create a 64 MB cube
    cube,_ = utilities.generate_gaussian_cube(shape=[200,200,200])
    sz = _.dtype.itemsize

    # check that cube is loaded into memory
    snap2 = tracemalloc.take_snapshot()
    diff = snap2.compare_to(snap1, 'lineno')
    diffvals = np.array([dd.size_diff for dd in diff])
    # at this point, the generated cube should still exist in memory
    assert diffvals.max()*u.B >= 200**3*sz*u.B

    wcs_in = cube.wcs
    wcs_out = wcs_in.deepcopy()
    wcs_out.wcs.ctype = ['GLON-SIN', 'GLAT-SIN', cube.wcs.wcs.ctype[2]]
    wcs_out.wcs.crval = [0.001, 0.001, cube.wcs.wcs.crval[2]]
    wcs_out.wcs.crpix = [2., 2., cube.wcs.wcs.crpix[2]]

    header_out = (wcs_out.to_header())
    header_out['NAXIS'] = 3
    header_out['NAXIS1'] = int(cube.shape[2]/2)
    header_out['NAXIS2'] = int(cube.shape[1]/2)
    header_out['NAXIS3'] = cube.shape[0]

    # First the unfilled reprojection test: new memory is allocated for
    # `result`, but nowhere else
    result = cube.reproject(header_out, filled=False)

    snap3 = tracemalloc.take_snapshot()
    diff = snap3.compare_to(snap2, 'lineno')
    diffvals = np.array([dd.size_diff for dd in diff])
    # result should have the same size as the input data, except smaller in two dims
    # make sure that's all that's allocated
    assert diffvals.max()*u.B >= 200*100**2*sz*u.B
    assert diffvals.max()*u.B < 200*110**2*sz*u.B

    # without masking the cube, nothing should change
    result = cube.reproject(header_out, filled=True)

    snap4 = tracemalloc.take_snapshot()
    diff = snap4.compare_to(snap3, 'lineno')
    diffvals = np.array([dd.size_diff for dd in diff])
    assert diffvals.max()*u.B <= 1*u.MB

    assert result.wcs.wcs.crval[0] == 0.001
    assert result.wcs.wcs.crpix[0] == 2.
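    # (nothing was masked above, so filling is a no-op and the reprojection
    # can reuse the existing data rather than materializing a filled copy)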
# masking the cube will force the fill to create a new in-memory copy mcube = cube.with_mask(cube > 0.1*cube.unit) # `_is_huge` would trigger a use_memmap assert not mcube._is_huge assert mcube.mask.any() # take a new snapshot because we're not testing the mask creation snap5 = tracemalloc.take_snapshot() tracemalloc.stop() tracemalloc.start() # stop/start so we can check peak mem use from here current_b4, peak_b4 = tracemalloc.get_traced_memory() result = mcube.reproject(header_out, filled=True) current_aftr, peak_aftr = tracemalloc.get_traced_memory() snap6 = tracemalloc.take_snapshot() diff = snap6.compare_to(snap5, 'lineno') diffvals = np.array([dd.size_diff for dd in diff]) # a duplicate of the cube should have been created by filling masked vals # (this should be near-exact since 'result' should occupy exactly the # same amount of memory) assert diffvals.max()*u.B <= 1*u.MB #>= 200**3*sz*u.B # the peak memory usage *during* reprojection will have that duplicate, # but the memory gets cleaned up afterward assert (peak_aftr-peak_b4)*u.B >= (200**3*sz*u.B + 200*100**2*sz*u.B) assert result.wcs.wcs.crval[0] == 0.001 assert result.wcs.wcs.crpix[0] == 2. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_spectral_axis.py0000644000175100001710000006224700000000000024050 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division from astropy import wcs from astropy.io import fits from astropy import units as u from astropy import constants from astropy.tests.helper import pytest, assert_quantity_allclose import numpy as np from .helpers import assert_allclose from . import path as data_path from ..spectral_axis import (convert_spectral_axis, determine_ctype_from_vconv, cdelt_derivative, determine_vconv_from_ctype, get_rest_value_from_wcs, air_to_vac, air_to_vac_deriv, vac_to_air, doppler_z, doppler_gamma, doppler_beta) def test_cube_wcs_freqtovel(): header = fits.Header.fromtextfile(data_path('cubewcs1.hdr')) w1 = wcs.WCS(header) # CTYPE3 = 'FREQ' newwcs = convert_spectral_axis(w1, 'km/s', 'VRAD', rest_value=w1.wcs.restfrq*u.Hz) assert newwcs.wcs.ctype[2] == 'VRAD' assert newwcs.wcs.crval[2] == 305.2461585938794 assert newwcs.wcs.cunit[2] == u.Unit('km/s') newwcs = convert_spectral_axis(w1, 'km/s', 'VRAD') assert newwcs.wcs.ctype[2] == 'VRAD' assert newwcs.wcs.crval[2] == 305.2461585938794 assert newwcs.wcs.cunit[2] == u.Unit('km/s') def test_cube_wcs_freqtovopt(): header = fits.Header.fromtextfile(data_path('cubewcs1.hdr')) w1 = wcs.WCS(header) w2 = convert_spectral_axis(w1, 'km/s', 'VOPT') # TODO: what should w2's values be? test them # these need to be set to zero to test the failure w1.wcs.restfrq = 0.0 w1.wcs.restwav = 0.0 with pytest.raises(ValueError) as exc: convert_spectral_axis(w1, 'km/s', 'VOPT') assert exc.value.args[0] == 'If converting from wavelength/frequency to speed, a reference wavelength/frequency is required.' @pytest.mark.parametrize('wcstype',('Z','W','R','V')) def test_greisen2006(wcstype): # This is the header extracted from Greisen 2006, including many examples # of valid transforms. 
It should be the gold standard (in principle) hdr = fits.Header.fromtextfile(data_path('greisen2006.hdr')) # We have not implemented frame conversions, so we can only convert bary # <-> bary in this case wcs0 = wcs.WCS(hdr, key='F') wcs1 = wcs.WCS(hdr, key=wcstype) if wcstype in ('R','V','Z'): if wcs1.wcs.restfrq: rest = wcs1.wcs.restfrq*u.Hz elif wcs1.wcs.restwav: rest = wcs1.wcs.restwav*u.m else: rest = None outunit = u.Unit(wcs1.wcs.cunit[wcs1.wcs.spec]) out_ctype = wcs1.wcs.ctype[wcs1.wcs.spec] wcs2 = convert_spectral_axis(wcs0, outunit, out_ctype, rest_value=rest) assert_allclose(wcs2.wcs.cdelt[wcs2.wcs.spec], wcs1.wcs.cdelt[wcs1.wcs.spec], rtol=1.e-3) assert_allclose(wcs2.wcs.crval[wcs2.wcs.spec], wcs1.wcs.crval[wcs1.wcs.spec], rtol=1.e-3) assert wcs2.wcs.ctype[wcs2.wcs.spec] == wcs1.wcs.ctype[wcs1.wcs.spec] assert wcs2.wcs.cunit[wcs2.wcs.spec] == wcs1.wcs.cunit[wcs1.wcs.spec] # round trip test: inunit = u.Unit(wcs0.wcs.cunit[wcs0.wcs.spec]) in_ctype = wcs0.wcs.ctype[wcs0.wcs.spec] wcs3 = convert_spectral_axis(wcs2, inunit, in_ctype, rest_value=rest) assert_allclose(wcs3.wcs.crval[wcs3.wcs.spec], wcs0.wcs.crval[wcs0.wcs.spec], rtol=1.e-3) assert_allclose(wcs3.wcs.cdelt[wcs3.wcs.spec], wcs0.wcs.cdelt[wcs0.wcs.spec], rtol=1.e-3) assert wcs3.wcs.ctype[wcs3.wcs.spec] == wcs0.wcs.ctype[wcs0.wcs.spec] assert wcs3.wcs.cunit[wcs3.wcs.spec] == wcs0.wcs.cunit[wcs0.wcs.spec] def test_byhand_f2v(): # VELO-F2V CRVAL3F = 1.37847121643E+09 CDELT3F = 9.764775E+04 RESTFRQV= 1.420405752E+09 CRVAL3V = 8.98134229811E+06 CDELT3V = -2.1217551E+04 CUNIT3V = 'm/s' CUNIT3F = 'Hz' crvalf = CRVAL3F * u.Unit(CUNIT3F) crvalv = CRVAL3V * u.Unit(CUNIT3V) restfreq = RESTFRQV * u.Unit(CUNIT3F) cdeltf = CDELT3F * u.Unit(CUNIT3F) cdeltv = CDELT3V * u.Unit(CUNIT3V) # (Pdb) crval_in,crval_lin1,crval_lin2,crval_out # (, , , ) (Pdb) # cdelt_in, cdelt_lin1, cdelt_lin2, cdelt_out # (, , , ) crvalv_computed = crvalf.to(CUNIT3V, u.doppler_relativistic(restfreq)) cdeltv_computed = -4*constants.c*cdeltf*crvalf*restfreq**2 / (crvalf**2+restfreq**2)**2 cdeltv_computed_byfunction = cdelt_derivative(crvalf, cdeltf, intype='frequency', outtype='speed', rest=restfreq) # this should be EXACT assert cdeltv_computed == cdeltv_computed_byfunction assert_allclose(crvalv_computed, crvalv, rtol=1.e-3) assert_allclose(cdeltv_computed, cdeltv, rtol=1.e-3) # round trip # (Pdb) crval_in,crval_lin1,crval_lin2,crval_out # (, , # , ) # (Pdb) cdelt_in, cdelt_lin1, cdelt_lin2, cdelt_out # (, , , ) crvalf_computed = crvalv_computed.to(CUNIT3F, u.doppler_relativistic(restfreq)) cdeltf_computed = -(cdeltv_computed * constants.c * restfreq / ((constants.c+crvalv_computed)*(constants.c**2 - crvalv_computed**2)**0.5)) assert_allclose(crvalf_computed, crvalf, rtol=1.e-2) assert_allclose(cdeltf_computed, cdeltf, rtol=1.e-2) cdeltf_computed_byfunction = cdelt_derivative(crvalv_computed, cdeltv_computed, intype='speed', outtype='frequency', rest=restfreq) # this should be EXACT assert cdeltf_computed == cdeltf_computed_byfunction def test_byhand_vrad(): # VRAD CRVAL3F = 1.37847121643E+09 CDELT3F = 9.764775E+04 RESTFRQR= 1.420405752E+09 CRVAL3R = 8.85075090419E+06 CDELT3R = -2.0609645E+04 CUNIT3R = 'm/s' CUNIT3F = 'Hz' crvalf = CRVAL3F * u.Unit(CUNIT3F) crvalv = CRVAL3R * u.Unit(CUNIT3R) restfreq = RESTFRQR * u.Unit(CUNIT3F) cdeltf = CDELT3F * u.Unit(CUNIT3F) cdeltv = CDELT3R * u.Unit(CUNIT3R) # (Pdb) crval_in,crval_lin1,crval_lin2,crval_out # (, , , ) # (Pdb) cdelt_in, cdelt_lin1, cdelt_lin2, cdelt_out # (, , , ) crvalv_computed = crvalf.to(CUNIT3R, 
def test_byhand_vrad():
    # VRAD
    CRVAL3F = 1.37847121643E+09
    CDELT3F = 9.764775E+04
    RESTFRQR = 1.420405752E+09
    CRVAL3R = 8.85075090419E+06
    CDELT3R = -2.0609645E+04
    CUNIT3R = 'm/s'
    CUNIT3F = 'Hz'

    crvalf = CRVAL3F * u.Unit(CUNIT3F)
    crvalv = CRVAL3R * u.Unit(CUNIT3R)
    restfreq = RESTFRQR * u.Unit(CUNIT3F)
    cdeltf = CDELT3F * u.Unit(CUNIT3F)
    cdeltv = CDELT3R * u.Unit(CUNIT3R)

    crvalv_computed = crvalf.to(CUNIT3R, u.doppler_radio(restfreq))
    cdeltv_computed = -(cdeltf / restfreq)*constants.c

    assert_allclose(crvalv_computed, crvalv, rtol=1.e-3)
    assert_allclose(cdeltv_computed, cdeltv, rtol=1.e-3)

    crvalf_computed = crvalv_computed.to(CUNIT3F, u.doppler_radio(restfreq))
    cdeltf_computed = -(cdeltv_computed/constants.c) * restfreq

    assert_allclose(crvalf_computed, crvalf, rtol=1.e-3)
    assert_allclose(cdeltf_computed, cdeltf, rtol=1.e-3)

    # round trip:
    # (Pdb) myunit,lin_cunit,out_lin_cunit,outunit
    # WRONG (Unit("m / s"), Unit("m / s"), Unit("Hz"), Unit("Hz"))
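# The radio-convention relations exercised above, in closed form (an
# illustrative sketch, not part of the original suite): v_rad is linear in
# frequency, v = c * (1 - nu/nu0), so dv/dnu = -c/nu0 independent of nu.
def test_vrad_formula_sketch():
    restfreq = 1.420405752E+09 * u.Hz
    nu = 1.37847121643E+09 * u.Hz
    vrad = constants.c * (1 - nu/restfreq)
    assert_allclose(vrad.to(u.m/u.s),
                    nu.to(u.m/u.s, u.doppler_radio(restfreq)),
                    rtol=1.e-10)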
def test_byhand_vopt():
    # VOPT: case "Z"
    CRVAL3F = 1.37847121643E+09
    CDELT3F = 9.764775E+04
    CUNIT3F = 'Hz'
    RESTWAVZ = 0.211061139
    #CTYPE3Z = 'VOPT-F2W'
    # This comes from Greisen 2006, but appears to be wrong: CRVAL3Z = 9.120000E+06
    CRVAL3Z = 9.120002206E+06
    CDELT3Z = -2.1882651E+04
    CUNIT3Z = 'm/s'

    crvalf = CRVAL3F * u.Unit(CUNIT3F)
    crvalv = CRVAL3Z * u.Unit(CUNIT3Z)
    restwav = RESTWAVZ * u.m
    cdeltf = CDELT3F * u.Unit(CUNIT3F)
    cdeltv = CDELT3Z * u.Unit(CUNIT3Z)

    # Forward: freq -> vopt
    #crvalv_computed = crvalf.to(CUNIT3R, u.doppler_radio(restwav))
    crvalw_computed = crvalf.to(u.m, u.spectral())
    crvalw_computed32 = crvalf.astype('float32').to(u.m, u.spectral())
    cdeltw_computed = -(cdeltf / crvalf**2)*constants.c
    cdeltw_computed_byfunction = cdelt_derivative(crvalf, cdeltf,
                                                  intype='frequency',
                                                  outtype='length',
                                                  rest=None)
    # this should be EXACT
    assert cdeltw_computed == cdeltw_computed_byfunction

    crvalv_computed = crvalw_computed.to(CUNIT3Z, u.doppler_optical(restwav))
    crvalv_computed32 = crvalw_computed32.astype('float32').to(CUNIT3Z,
                                                               u.doppler_optical(restwav))
    #cdeltv_computed = (cdeltw_computed *
    #                   4*constants.c*crvalw_computed*restwav**2 /
    #                   (restwav**2+crvalw_computed**2)**2)
    cdeltv_computed = (cdeltw_computed / restwav)*constants.c
    cdeltv_computed_byfunction = cdelt_derivative(crvalw_computed, cdeltw_computed,
                                                  intype='length',
                                                  outtype='speed',
                                                  rest=restwav,
                                                  linear=True)

    # Disagreement is 2.5e-7: good, but not really great...
    #assert np.abs((crvalv_computed-crvalv)/crvalv) < 1e-6
    assert_allclose(crvalv_computed, crvalv, rtol=1.e-2)
    assert_allclose(cdeltv_computed, cdeltv, rtol=1.e-2)

    # Round-trip test: from velo_opt -> freq
    crvalw_computed = crvalv_computed.to(u.m, u.doppler_optical(restwav))
    cdeltw_computed = (cdeltv_computed/constants.c) * restwav
    cdeltw_computed_byfunction = cdelt_derivative(crvalv_computed, cdeltv_computed,
                                                  intype='speed',
                                                  outtype='length',
                                                  rest=restwav,
                                                  linear=True)
    assert cdeltw_computed == cdeltw_computed_byfunction

    crvalf_computed = crvalw_computed.to(CUNIT3F, u.spectral())
    cdeltf_computed = -cdeltw_computed * constants.c / crvalw_computed**2

    assert_allclose(crvalf_computed, crvalf, rtol=1.e-3)
    assert_allclose(cdeltf_computed, cdeltf, rtol=1.e-3)

    cdeltf_computed_byfunction = cdelt_derivative(crvalw_computed, cdeltw_computed,
                                                  intype='length',
                                                  outtype='frequency',
                                                  rest=None)
    assert cdeltf_computed == cdeltf_computed_byfunction

    # Fails intentionally (but not really worth testing)
    #crvalf_computed = crvalv_computed.to(CUNIT3F, u.spectral()+u.doppler_optical(restwav))
    #cdeltf_computed = -(cdeltv_computed / constants.c) * restwav.to(u.Hz, u.spectral())
    #assert_allclose(crvalf_computed, crvalf, rtol=1.e-3)
    #assert_allclose(cdeltf_computed, cdeltf, rtol=1.e-3)


def test_byhand_f2w():
    CRVAL3F = 1.37847121643E+09
    CDELT3F = 9.764775E+04
    CUNIT3F = 'Hz'
    #CTYPE3W = 'WAVE-F2W'
    CRVAL3W = 0.217481841062
    CDELT3W = -1.5405916E-05
    CUNIT3W = 'm'

    crvalf = CRVAL3F * u.Unit(CUNIT3F)
    crvalw = CRVAL3W * u.Unit(CUNIT3W)
    cdeltf = CDELT3F * u.Unit(CUNIT3F)
    cdeltw = CDELT3W * u.Unit(CUNIT3W)

    crvalf_computed = crvalw.to(CUNIT3F, u.spectral())
    cdeltf_computed = -constants.c * cdeltw / crvalw**2

    assert_allclose(crvalf_computed, crvalf, rtol=0.1)
    assert_allclose(cdeltf_computed, cdeltf, rtol=0.1)


@pytest.mark.parametrize(('ctype','unit','velocity_convention','result'),
                         (('VELO-F2V', "Hz", None, 'FREQ'),
                          ('VELO-F2V', "m", None, 'WAVE-F2W'),
                          ('VOPT', "m", None, 'WAVE'),
                          ('VOPT', "Hz", None, 'FREQ-W2F'),
                          ('VELO', "Hz", None, 'FREQ-V2F'),
                          ('WAVE', "Hz", None, 'FREQ-W2F'),
                          ('FREQ', 'm/s', None,
                           ValueError('A velocity convention must be specified')),
                          ('FREQ', 'm/s', u.doppler_radio, 'VRAD'),
                          ('FREQ', 'm/s', u.doppler_optical, 'VOPT-F2W'),
                          ('FREQ', 'm/s', u.doppler_relativistic, 'VELO-F2V'),
                          ('WAVE', 'm/s', u.doppler_radio, 'VRAD-W2F')))
def test_ctype_determinator(ctype, unit, velocity_convention, result):
    if isinstance(result, Exception):
        with pytest.raises(Exception) as exc:
            determine_ctype_from_vconv(ctype, unit,
                                       velocity_convention=velocity_convention)
        assert exc.value.args[0] == result.args[0]
        assert type(exc.value) == type(result)
    else:
        outctype = determine_ctype_from_vconv(ctype, unit,
                                              velocity_convention=velocity_convention)
        assert outctype == result


@pytest.mark.parametrize(('ctype','vconv'),
                         (('VELO-F2W', u.doppler_optical),
                          ('VELO-F2V', u.doppler_relativistic),
                          ('VRAD', u.doppler_radio),
                          ('VOPT', u.doppler_optical),
                          ('VELO', u.doppler_relativistic),
                          ('WAVE', u.doppler_optical),
                          ('WAVE-F2W', u.doppler_optical),
                          ('WAVE-V2W', u.doppler_optical),
                          ('FREQ', u.doppler_radio),
                          ('FREQ-V2F', u.doppler_radio),
                          ('FREQ-W2F', u.doppler_radio),))
def test_vconv_determinator(ctype, vconv):
    assert determine_vconv_from_ctype(ctype) == vconv


@pytest.fixture
def filename(request):
    return request.getfixturevalue(request.param)
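# A minimal round-trip sketch tying the two determinators together (added
# for illustration; the expected values come straight from the parametrized
# tables above):
def test_determinator_roundtrip_sketch():
    out_ctype = determine_ctype_from_vconv('FREQ', u.m/u.s,
                                           velocity_convention=u.doppler_radio)
    assert out_ctype == 'VRAD'
    assert determine_vconv_from_ctype(out_ctype) == u.doppler_radio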
@pytest.mark.parametrize(('filename'), (('data_advs'),
                                        ('data_dvsa'),
                                        ('data_sdav'),
                                        ('data_sadv'),
                                        ('data_vsad'),
                                        ('data_vad'),
                                        ('data_adv'),
                                        ), indirect=['filename'])
def test_vopt_to_freq(filename):
    h = fits.getheader(filename)
    wcs0 = wcs.WCS(h)

    # check to make sure astropy.wcs's "fix" changes VELO-HEL to VOPT
    assert wcs0.wcs.ctype[wcs0.wcs.spec] == 'VOPT'

    out_ctype = determine_ctype_from_vconv('VOPT', u.Hz)

    wcs1 = convert_spectral_axis(wcs0, u.Hz, out_ctype)

    assert wcs1.wcs.ctype[wcs1.wcs.spec] == 'FREQ-W2F'


@pytest.mark.parametrize('wcstype', ('Z','W','R','V','F'))
def test_change_rest_frequency(wcstype):
    # This is the header extracted from Greisen 2006, including many examples
    # of valid transforms.  It should be the gold standard (in principle)
    hdr = fits.Header.fromtextfile(data_path('greisen2006.hdr'))

    wcs0 = wcs.WCS(hdr, key=wcstype)

    old_rest = get_rest_value_from_wcs(wcs0)
    if old_rest is None:
        # This test doesn't matter if there was no rest frequency in the first
        # place but I prefer to keep the option open in case we want to try
        # forcing a rest frequency on some of the non-velocity frames at some
        # point
        return
    vconv1 = determine_vconv_from_ctype(hdr['CTYPE3'+wcstype])
    new_rest = (100*u.km/u.s).to(u.Hz, vconv1(old_rest))

    wcs1 = wcs.WCS(hdr, key='V')
    vconv2 = determine_vconv_from_ctype(hdr['CTYPE3V'])

    inunit = u.Unit(wcs0.wcs.cunit[wcs0.wcs.spec])
    outunit = u.Unit(wcs1.wcs.cunit[wcs1.wcs.spec])
    # VELO-F2V
    out_ctype = wcs1.wcs.ctype[wcs1.wcs.spec]

    wcs2 = convert_spectral_axis(wcs0, outunit, out_ctype,
                                 rest_value=new_rest)

    sp1 = wcs1.sub([wcs.WCSSUB_SPECTRAL])
    sp2 = wcs2.sub([wcs.WCSSUB_SPECTRAL])

    p_old = sp1.wcs_world2pix([old_rest.to(inunit, vconv1(old_rest)).value,
                               new_rest.to(inunit, vconv1(old_rest)).value], 0)
    p_new = sp2.wcs_world2pix([old_rest.to(outunit, vconv2(new_rest)).value,
                               new_rest.to(outunit, vconv2(new_rest)).value], 0)

    assert_allclose(p_old, p_new, rtol=1e-3)


# from http://classic.sdss.org/dr5/products/spectra/vacwavelength.html
# these aren't accurate enough for my liking, but I can't find a better list
# readily
air_vac = {
    'H-beta': (4861.363, 4862.721)*u.AA,
    '[O III] 4959': (4958.911, 4960.295)*u.AA,
    '[O III] 5007': (5006.843, 5008.239)*u.AA,
    '[N II] 6548': (6548.05, 6549.86)*u.AA,
    'H-alpha': (6562.801, 6564.614)*u.AA,
    '[N II] 6583': (6583.45, 6585.27)*u.AA,
    '[S II] 6716': (6716.44, 6718.29)*u.AA,
    '[S II] 6731': (6730.82, 6732.68)*u.AA,
}


@pytest.mark.parametrize(('air','vac'), air_vac.values())
def test_air_to_vac(air, vac):
    # This is the accuracy provided by the line list we have.
    # I'm not sure if the formulae are incorrect or if the reference
    # wavelengths are, but this is an accuracy of only ~6 km/s, which is
    # *very bad* for astrophysical applications.
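    # Hedged sanity check on the ~6 km/s figure quoted above (illustrative,
    # not from the original suite): a 0.15 AA offset at H-alpha (6563 AA)
    # corresponds to dv = c * dlam / lam ~ 6.9 km/s.
    dv_equiv = (0.15 * u.AA / (6563 * u.AA)) * constants.c
    assert dv_equiv.to(u.km/u.s).value < 10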
assert np.abs((air_to_vac(air)- vac)) < 0.15*u.AA assert np.abs((vac_to_air(vac)- air)) < 0.15*u.AA assert np.abs((air_to_vac(air)- vac)/vac) < 2e-5 assert np.abs((vac_to_air(vac)- air)/air) < 2e-5 # round tripping assert np.abs((vac_to_air(air_to_vac(air))-air))/air < 1e-8 assert np.abs((air_to_vac(vac_to_air(vac))-vac))/vac < 1e-8 def test_byhand_awav2vel(): # AWAV CRVAL3A = (6560*u.AA).to(u.m).value CDELT3A = (1.0*u.AA).to(u.m).value CUNIT3A = 'm' CRPIX3A = 1.0 # restwav MUST be vacuum restwl = air_to_vac(6562.81*u.AA) RESTWAV = restwl.to(u.m).value CRVAL3V = (CRVAL3A*u.m).to(u.m/u.s, u.doppler_optical(restwl)).value CDELT3V = (CDELT3A*u.m*air_to_vac_deriv(CRVAL3A*u.m)/restwl) * constants.c CUNIT3V = 'm/s' mywcs = wcs.WCS(naxis=1) mywcs.wcs.ctype[0] = 'AWAV' mywcs.wcs.crval[0] = CRVAL3A mywcs.wcs.crpix[0] = CRPIX3A mywcs.wcs.cunit[0] = CUNIT3A mywcs.wcs.cdelt[0] = CDELT3A mywcs.wcs.restwav = RESTWAV mywcs.wcs.set() newwcs = convert_spectral_axis(mywcs, u.km/u.s, determine_ctype_from_vconv(mywcs.wcs.ctype[0], u.km/u.s, 'optical')) newwcs.wcs.set() assert newwcs.wcs.cunit[0] == 'm / s' np.testing.assert_almost_equal(newwcs.wcs.crval, air_to_vac(CRVAL3A*u.m).to(u.m/u.s, u.doppler_optical(restwl)).value) # Check that the cdelts match the expected cdelt, 1 angstrom / rest # wavelength (vac) np.testing.assert_almost_equal(newwcs.wcs.cdelt, CDELT3V.to(u.m/u.s).value) # Check that the reference wavelength is 2.81 angstroms up np.testing.assert_almost_equal(newwcs.wcs_pix2world((2.81,), 0), 0.0, decimal=3) # Go through a full-on sanity check: vline = 100*u.km/u.s wave_line_vac = vline.to(u.AA, u.doppler_optical(restwl)) wave_line_air = vac_to_air(wave_line_vac) pix_line_input = mywcs.wcs_world2pix((wave_line_air.to(u.m).value,), 0) pix_line_output = newwcs.wcs_world2pix((vline.to(u.m/u.s).value,), 0) np.testing.assert_almost_equal(pix_line_output, pix_line_input, decimal=4) def test_byhand_awav2wav(): # AWAV CRVAL3A = (6560*u.AA).to(u.m).value CDELT3A = (1.0*u.AA).to(u.m).value CUNIT3A = 'm' CRPIX3A = 1.0 mywcs = wcs.WCS(naxis=1) mywcs.wcs.ctype[0] = 'AWAV' mywcs.wcs.crval[0] = CRVAL3A mywcs.wcs.crpix[0] = CRPIX3A mywcs.wcs.cunit[0] = CUNIT3A mywcs.wcs.cdelt[0] = CDELT3A mywcs.wcs.set() newwcs = convert_spectral_axis(mywcs, u.AA, 'WAVE') newwcs.wcs.set() np.testing.assert_almost_equal(newwcs.wcs_pix2world((0,),0), air_to_vac(mywcs.wcs_pix2world((0,),0)*u.m).value) np.testing.assert_almost_equal(newwcs.wcs_pix2world((10,),0), air_to_vac(mywcs.wcs_pix2world((10,),0)*u.m).value) # At least one of the components MUST change assert not (mywcs.wcs.crval[0] == newwcs.wcs.crval[0] and mywcs.wcs.crpix[0] == newwcs.wcs.crpix[0]) class test_nir_sinfoni_base(object): def setup_method(self, method): CD3_3 = 0.000245000002905726 # CD rotation matrix CTYPE3 = 'WAVE ' # wavelength axis in microns CRPIX3 = 1109. 
# Reference pixel in z CRVAL3 = 2.20000004768372 # central wavelength CDELT3 = 0.000245000002905726 # microns per pixel CUNIT3 = 'um ' # spectral unit SPECSYS = 'TOPOCENT' # Coordinate reference frame self.rest_wavelength = 2.1218*u.um self.mywcs = wcs.WCS(naxis=1) self.mywcs.wcs.ctype[0] = CTYPE3 self.mywcs.wcs.crval[0] = CRVAL3 self.mywcs.wcs.crpix[0] = CRPIX3 self.mywcs.wcs.cunit[0] = CUNIT3 self.mywcs.wcs.cdelt[0] = CDELT3 self.mywcs.wcs.cd = [[CD3_3]] self.mywcs.wcs.specsys = SPECSYS self.mywcs.wcs.set() self.wavelengths = np.array([[2.12160005e-06, 2.12184505e-06, 2.12209005e-06]]) np.testing.assert_almost_equal(self.mywcs.wcs_pix2world([788,789,790], 0), self.wavelengths) def test_nir_sinfoni_example_optical(self): mywcs = self.mywcs.copy() velocities_opt = ((self.wavelengths*u.m-self.rest_wavelength)/(self.wavelengths*u.m) * constants.c).to(u.km/u.s) newwcs_opt = convert_spectral_axis(mywcs, u.km/u.s, 'VOPT', rest_value=self.rest_wavelength) assert newwcs_opt.wcs.cunit[0] == u.km/u.s newwcs_opt.wcs.set() worldpix_opt = newwcs_opt.wcs_pix2world([788,789,790], 0) assert newwcs_opt.wcs.cunit[0] == u.m/u.s np.testing.assert_almost_equal(worldpix_opt, velocities_opt.to(newwcs_opt.wcs.cunit[0]).value) def test_nir_sinfoni_example_radio(self): mywcs = self.mywcs.copy() velocities_rad = ((self.wavelengths*u.m-self.rest_wavelength)/(self.rest_wavelength) * constants.c).to(u.km/u.s) newwcs_rad = convert_spectral_axis(mywcs, u.km/u.s, 'VRAD', rest_value=self.rest_wavelength) assert newwcs_rad.wcs.cunit[0] == u.km/u.s newwcs_rad.wcs.set() worldpix_rad = newwcs_rad.wcs_pix2world([788,789,790], 0) assert newwcs_rad.wcs.cunit[0] == u.m/u.s np.testing.assert_almost_equal(worldpix_rad, velocities_rad.to(newwcs_rad.wcs.cunit[0]).value) def test_equivalencies(): """ Testing spectral equivalencies """ # range in "RADIO" with "100 * u.GHz" as rest frequancy range = u.Quantity([-318 * u.km / u.s, -320 * u.km / u.s]) # range in freq r1 = range.to("GHz", equivalencies=u.doppler_radio(100 * u.GHz)) # round conversion for "doppler_z" r2 = r1.to("km/s", equivalencies=doppler_z(100 * u.GHz)) r3 = r2.to("GHz", equivalencies=doppler_z(100*u.GHz)) assert_quantity_allclose(r1, r3) # round conversion for "doppler_beta" r2 = r1.to("km/s", equivalencies=doppler_beta(100 * u.GHz)) r3 = r2.to("GHz", equivalencies=doppler_beta(100 * u.GHz)) assert_quantity_allclose(r1, r3) # round conversion for "doppler_gamma" r2 = r1.to("km/s", equivalencies=doppler_gamma(100 * u.GHz)) r3 = r2.to("GHz", equivalencies=doppler_gamma(100 * u.GHz)) assert_quantity_allclose(r1, r3) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_spectral_cube.py0000644000175100001710000026665000000000000024026 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import re import copy import operator import itertools import warnings import mmap from distutils.version import LooseVersion import sys import pytest import astropy from astropy import stats from astropy.io import fits from astropy import units as u from astropy.wcs import WCS from astropy.wcs import _wcs from astropy.tests.helper import assert_quantity_allclose from astropy.convolution import Gaussian2DKernel, Tophat2DKernel import numpy as np from .. import (BooleanArrayMask, FunctionMask, LazyMask, CompositeMask) from ..spectral_cube import (OneDSpectrum, Projection, VaryingResolutionOneDSpectrum, LowerDimensionalObject) from ..np_compat import allbadtonan from .. 
import spectral_axis from .. import base_class from .. import utils from .. import SpectralCube, VaryingResolutionSpectralCube, DaskSpectralCube from . import path from .helpers import assert_allclose, assert_array_equal try: import casatools ia = casatools.image() casaOK = True except ImportError: try: from taskinit import ia casaOK = True except ImportError: casaOK = False WINDOWS = sys.platform == "win32" # needed to test for warnings later warnings.simplefilter('always', UserWarning) warnings.simplefilter('error', utils.UnsupportedIterationStrategyWarning) warnings.simplefilter('error', utils.NotImplementedWarning) warnings.simplefilter('error', utils.WCSMismatchWarning) warnings.simplefilter('error', FutureWarning) warnings.filterwarnings(action='ignore', category=FutureWarning, module='reproject') try: import yt YT_INSTALLED = True YT_LT_301 = LooseVersion(yt.__version__) < LooseVersion('3.0.1') except ImportError: YT_INSTALLED = False YT_LT_301 = False try: import scipy scipyOK = True except ImportError: scipyOK = False import os # if ON_TRAVIS is set, we're on travis. on_travis = bool(os.environ.get('ON_TRAVIS')) from radio_beam import Beam, Beams from radio_beam.utils import BeamError NUMPY_LT_19 = LooseVersion(np.__version__) < LooseVersion('1.9.0') def cube_and_raw(filename, use_dask=None): if use_dask is None: raise ValueError('use_dask should be explicitly set') p = path(filename) if os.path.splitext(p)[-1] == '.fits': with fits.open(p) as hdulist: d = hdulist[0].data c = SpectralCube.read(p, format='fits', mode='readonly', use_dask=use_dask) elif os.path.splitext(p)[-1] == '.image': ia.open(p) d = ia.getchunk() ia.unlock() ia.close() ia.done() c = SpectralCube.read(p, format='casa_image', use_dask=use_dask) else: raise ValueError("Unsupported filetype") return c, d def test_arithmetic_warning(data_vda_jybeam_lower, recwarn, use_dask): cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) assert not cube._is_huge # make sure the small cube raises a warning about loading into memory with pytest.warns(UserWarning, match='requires loading the entire'): cube + 5*cube.unit def test_huge_disallowed(data_vda_jybeam_lower, use_dask): cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) assert not cube._is_huge # We need to reduce the memory threshold rather than use a large cube to # make sure we don't use too much memory during testing. from .. 
import cube_utils OLD_MEMORY_THRESHOLD = cube_utils.MEMORY_THRESHOLD try: cube_utils.MEMORY_THRESHOLD = 10 assert cube._is_huge with pytest.raises(ValueError, match='entire cube into memory'): cube + 5*cube.unit if use_dask: with pytest.raises(ValueError, match='entire cube into memory'): cube.mad_std() else: with pytest.raises(ValueError, match='entire cube into memory'): cube.max(how='cube') cube.allow_huge_operations = True # just make sure it doesn't fail cube + 5*cube.unit finally: cube_utils.MEMORY_THRESHOLD = OLD_MEMORY_THRESHOLD del cube class BaseTest(object): @pytest.fixture(autouse=True) def setup_method_fixture(self, request, data_adv, use_dask): c, d = cube_and_raw(data_adv, use_dask=use_dask) mask = BooleanArrayMask(d > 0.5, c._wcs) c._mask = mask self.c = c self.mask = mask self.d = d class BaseTestMultiBeams(object): @pytest.fixture(autouse=True) def setup_method_fixture(self, request, data_adv_beams, use_dask): c, d = cube_and_raw(data_adv_beams, use_dask=use_dask) mask = BooleanArrayMask(d > 0.5, c._wcs) c._mask = mask self.c = c self.mask = mask self.d = d @pytest.fixture def filename(request): return request.getfixturevalue(request.param) translist = [('data_advs', [0, 1, 2, 3]), ('data_dvsa', [2, 3, 0, 1]), ('data_sdav', [0, 2, 1, 3]), ('data_sadv', [0, 1, 2, 3]), ('data_vsad', [3, 0, 1, 2]), ('data_vad', [2, 0, 1]), ('data_vda', [0, 2, 1]), ('data_adv', [0, 1, 2]), ] translist_vrsc = [('data_vda_beams', [0, 2, 1])] class TestSpectralCube(object): @pytest.mark.parametrize(('filename', 'trans'), translist + translist_vrsc, indirect=['filename']) def test_consistent_transposition(self, filename, trans, use_dask): """data() should return velocity axis first, then world 1, then world 0""" c, d = cube_and_raw(filename, use_dask=use_dask) expected = np.squeeze(d.transpose(trans)) assert_allclose(c._get_filled_data(), expected) @pytest.mark.parametrize(('filename', 'view'), ( ('data_adv', np.s_[:, :,:]), ('data_adv', np.s_[::2, :, :2]), ('data_adv', np.s_[0]), ), indirect=['filename']) def test_world(self, filename, view, use_dask): p = path(filename) # d = fits.getdata(p) # wcs = WCS(p) # c = SpectralCube(d, wcs) c = SpectralCube.read(p) wcs = c.wcs # shp = d.shape # inds = np.indices(d.shape) shp = c.shape inds = np.indices(c.shape) pix = np.column_stack([i.ravel() for i in inds[::-1]]) world = wcs.all_pix2world(pix, 0).T world = [w.reshape(shp) for w in world] world = [w[view] * u.Unit(wcs.wcs.cunit[i]) for i, w in enumerate(world)][::-1] w2 = c.world[view] for result, expected in zip(w2, world): assert_allclose(result, expected) # Test world_flattened here, too w2_flat = c.flattened_world(view=view) for result, expected in zip(w2_flat, world): print(result.shape, expected.flatten().shape) assert_allclose(result, expected.flatten()) @pytest.mark.parametrize('view', (np.s_[:, :,:], np.s_[:2, :3, ::2])) def test_world_transposes_3d(self, view, data_adv, data_vad, use_dask): c1, d1 = cube_and_raw(data_adv, use_dask=use_dask) c2, d2 = cube_and_raw(data_vad, use_dask=use_dask) for w1, w2 in zip(c1.world[view], c2.world[view]): assert_allclose(w1, w2) @pytest.mark.parametrize('view', (np.s_[:, :,:], np.s_[:2, :3, ::2], np.s_[::3, ::2, :1], np.s_[:], )) def test_world_transposes_4d(self, view, data_advs, data_sadv, use_dask): c1, d1 = cube_and_raw(data_advs, use_dask=use_dask) c2, d2 = cube_and_raw(data_sadv, use_dask=use_dask) for w1, w2 in zip(c1.world[view], c2.world[view]): assert_allclose(w1, w2) @pytest.mark.parametrize(('filename','masktype','unit','suffix'), 
itertools.product(('data_advs', 'data_dvsa', 'data_sdav', 'data_sadv', 'data_vsad', 'data_vad', 'data_adv',), (BooleanArrayMask, LazyMask, FunctionMask, CompositeMask), ('Hz', u.Hz), ('.fits', '.image') if casaOK else ('.fits',) ), indirect=['filename']) def test_with_spectral_unit(self, filename, masktype, unit, suffix, use_dask): if suffix == '.image': if not use_dask: pytest.skip() import casatasks filename = str(filename) casatasks.importfits(filename, filename.replace('.fits', '.image')) filename = filename.replace('.fits', '.image') cube, data = cube_and_raw(filename, use_dask=use_dask) cube_freq = cube.with_spectral_unit(unit) if masktype == BooleanArrayMask: # don't use data here: # data haven't necessarily been rearranged to the correct shape by # cube_utils.orient mask = BooleanArrayMask(cube.filled_data[:].value>0, wcs=cube._wcs) elif masktype == LazyMask: mask = LazyMask(lambda x: x>0, cube=cube) elif masktype == FunctionMask: mask = FunctionMask(lambda x: x>0) elif masktype == CompositeMask: mask1 = FunctionMask(lambda x: x>0) mask2 = LazyMask(lambda x: x>0, cube) mask = CompositeMask(mask1, mask2) cube2 = cube.with_mask(mask) cube_masked_freq = cube2.with_spectral_unit(unit) if suffix == '.fits': assert cube_freq._wcs.wcs.ctype[cube_freq._wcs.wcs.spec] == 'FREQ-W2F' assert cube_masked_freq._wcs.wcs.ctype[cube_masked_freq._wcs.wcs.spec] == 'FREQ-W2F' assert cube_masked_freq._mask._wcs.wcs.ctype[cube_masked_freq._mask._wcs.wcs.spec] == 'FREQ-W2F' elif suffix == '.image': # this is *not correct* but it's a known failure in CASA: CASA's # image headers don't support any of the FITS spectral standard, so # it just ends up as 'FREQ'. This isn't on us to fix so this is # really an "xfail" that we hope will change... assert cube_freq._wcs.wcs.ctype[cube_freq._wcs.wcs.spec] == 'FREQ' assert cube_masked_freq._wcs.wcs.ctype[cube_masked_freq._wcs.wcs.spec] == 'FREQ' assert cube_masked_freq._mask._wcs.wcs.ctype[cube_masked_freq._mask._wcs.wcs.spec] == 'FREQ' # values taken from header rest = 1.42040571841E+09*u.Hz crval = -3.21214698632E+05*u.m/u.s outcv = crval.to(u.m, u.doppler_optical(rest)).to(u.Hz, u.spectral()) assert_allclose(cube_freq._wcs.wcs.crval[cube_freq._wcs.wcs.spec], outcv.to(u.Hz).value) assert_allclose(cube_masked_freq._wcs.wcs.crval[cube_masked_freq._wcs.wcs.spec], outcv.to(u.Hz).value) assert_allclose(cube_masked_freq._mask._wcs.wcs.crval[cube_masked_freq._mask._wcs.wcs.spec], outcv.to(u.Hz).value) @pytest.mark.parametrize(('operation', 'value'), ((operator.add, 0.5*u.K), (operator.sub, 0.5*u.K), (operator.mul, 0.5*u.K), (operator.truediv, 0.5*u.K), (operator.div if hasattr(operator,'div') else operator.floordiv, 0.5*u.K), )) def test_apply_everywhere(self, operation, value, data_advs, use_dask): c1, d1 = cube_and_raw(data_advs, use_dask=use_dask) # append 'o' to indicate that it has been operated on c1o = c1._apply_everywhere(operation, value) d1o = operation(u.Quantity(d1, u.K), value) assert np.all(d1o == c1o.filled_data[:]) # allclose fails on identical data? 
#assert_allclose(d1o, c1o.filled_data[:]) @pytest.mark.parametrize(('filename', 'trans'), translist, indirect=['filename']) def test_getitem(self, filename, trans, use_dask): c, d = cube_and_raw(filename, use_dask=use_dask) expected = np.squeeze(d.transpose(trans)) assert_allclose(c[0,:,:].value, expected[0,:,:]) assert_allclose(c[:,:,0].value, expected[:,:,0]) assert_allclose(c[:,0,:].value, expected[:,0,:]) # Not implemented: #assert_allclose(c[0,0,:].value, expected[0,0,:]) #assert_allclose(c[0,:,0].value, expected[0,:,0]) assert_allclose(c[:,0,0].value, expected[:,0,0]) assert_allclose(c[1,:,:].value, expected[1,:,:]) assert_allclose(c[:,:,1].value, expected[:,:,1]) assert_allclose(c[:,1,:].value, expected[:,1,:]) # Not implemented: #assert_allclose(c[1,1,:].value, expected[1,1,:]) #assert_allclose(c[1,:,1].value, expected[1,:,1]) assert_allclose(c[:,1,1].value, expected[:,1,1]) c2 = c.with_spectral_unit(u.km/u.s, velocity_convention='radio') assert_allclose(c2[0,:,:].value, expected[0,:,:]) assert_allclose(c2[:,:,0].value, expected[:,:,0]) assert_allclose(c2[:,0,:].value, expected[:,0,:]) # Not implemented: #assert_allclose(c2[0,0,:].value, expected[0,0,:]) #assert_allclose(c2[0,:,0].value, expected[0,:,0]) assert_allclose(c2[:,0,0].value, expected[:,0,0]) assert_allclose(c2[1,:,:].value, expected[1,:,:]) assert_allclose(c2[:,:,1].value, expected[:,:,1]) assert_allclose(c2[:,1,:].value, expected[:,1,:]) # Not implemented: #assert_allclose(c2[1,1,:].value, expected[1,1,:]) #assert_allclose(c2[1,:,1].value, expected[1,:,1]) assert_allclose(c2[:,1,1].value, expected[:,1,1]) @pytest.mark.parametrize(('filename', 'trans'), translist_vrsc, indirect=['filename']) def test_getitem_vrsc(self, filename, trans, use_dask): c, d = cube_and_raw(filename, use_dask=use_dask) expected = np.squeeze(d.transpose(trans)) # No pv slices for VRSC. assert_allclose(c[0,:,:].value, expected[0,:,:]) # Not implemented: #assert_allclose(c[0,0,:].value, expected[0,0,:]) #assert_allclose(c[0,:,0].value, expected[0,:,0]) assert_allclose(c[:,0,0].value, expected[:,0,0]) assert_allclose(c[1,:,:].value, expected[1,:,:]) # Not implemented: #assert_allclose(c[1,1,:].value, expected[1,1,:]) #assert_allclose(c[1,:,1].value, expected[1,:,1]) assert_allclose(c[:,1,1].value, expected[:,1,1]) c2 = c.with_spectral_unit(u.km/u.s, velocity_convention='radio') assert_allclose(c2[0,:,:].value, expected[0,:,:]) # Not implemented: #assert_allclose(c2[0,0,:].value, expected[0,0,:]) #assert_allclose(c2[0,:,0].value, expected[0,:,0]) assert_allclose(c2[:,0,0].value, expected[:,0,0]) assert_allclose(c2[1,:,:].value, expected[1,:,:]) # Not implemented: #assert_allclose(c2[1,1,:].value, expected[1,1,:]) #assert_allclose(c2[1,:,1].value, expected[1,:,1]) assert_allclose(c2[:,1,1].value, expected[:,1,1]) class TestArithmetic(object): # FIXME: in the tests below we need to manually do self.c1 = self.d1 = None # because if we try and do this in a teardown method, the open-files check # gets done first. This is an issue that should be resolved in pytest-openfiles. 
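    # For reference, the teardown being worked around would be just the
    # following (a sketch, intentionally left disabled because the
    # pytest-openfiles check runs before teardown_method):
    #
    # def teardown_method(self, method):
    #     self.c1 = self.d1 = None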
@pytest.fixture(autouse=True) def setup_method_fixture(self, request, data_adv_simple, use_dask): self.c1, self.d1 = cube_and_raw(data_adv_simple, use_dask=use_dask) @pytest.mark.parametrize(('value'),(1,1.0,2,2.0)) def test_add(self,value): d2 = self.d1 + value c2 = self.c1 + value*u.K assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K self.c1 = self.d1 = None def test_add_cubes(self): d2 = self.d1 + self.d1 c2 = self.c1 + self.c1 assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K self.c1 = self.d1 = None @pytest.mark.parametrize(('value'),(1,1.0,2,2.0)) def test_subtract(self, value): d2 = self.d1 - value c2 = self.c1 - value*u.K assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K # regression test #251: the _data attribute must not be a quantity assert not hasattr(c2._data, 'unit') self.c1 = self.d1 = None def test_subtract_cubes(self): d2 = self.d1 - self.d1 c2 = self.c1 - self.c1 assert np.all(d2 == c2.filled_data[:].value) assert np.all(c2.filled_data[:].value == 0) assert c2.unit == u.K # regression test #251: the _data attribute must not be a quantity assert not hasattr(c2._data, 'unit') self.c1 = self.d1 = None @pytest.mark.parametrize(('value'),(1,1.0,2,2.0)) def test_mul(self, value): d2 = self.d1 * value c2 = self.c1 * value assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K self.c1 = self.d1 = None def test_mul_cubes(self): d2 = self.d1 * self.d1 c2 = self.c1 * self.c1 assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K**2 self.c1 = self.d1 = None @pytest.mark.parametrize(('value'),(1,1.0,2,2.0)) def test_div(self, value): d2 = self.d1 / value c2 = self.c1 / value assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K self.c1 = self.d1 = None def test_div_cubes(self): d2 = self.d1 / self.d1 c2 = self.c1 / self.c1 assert np.all((d2 == c2.filled_data[:].value) | (np.isnan(c2.filled_data[:]))) assert np.all((c2.filled_data[:] == 1) | (np.isnan(c2.filled_data[:]))) assert c2.unit == u.one self.c1 = self.d1 = None @pytest.mark.parametrize(('value'), (1,1.0,2,2.0)) def test_pow(self, value): d2 = self.d1 ** value c2 = self.c1 ** value assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K**value self.c1 = self.d1 = None def test_cube_add(self): c2 = self.c1 + self.c1 d2 = self.d1 + self.d1 assert np.all(d2 == c2.filled_data[:].value) assert c2.unit == u.K self.c1 = self.d1 = None class TestFilters(BaseTest): def test_mask_data(self): c, d = self.c, self.d expected = np.where(d > .5, d, np.nan) assert_allclose(c._get_filled_data(), expected) expected = np.where(d > .5, d, 0) assert_allclose(c._get_filled_data(fill=0), expected) self.c = self.d = None @pytest.mark.parametrize('operation', (operator.lt, operator.gt, operator.le, operator.ge)) def test_mask_comparison(self, operation): c, d = self.c, self.d dmask = operation(d, 0.6) & self.c.mask.include() cmask = operation(c, 0.6*u.K) assert (self.c.mask.include() & cmask.include()).sum() == dmask.sum() assert np.all(c.with_mask(cmask).mask.include() == dmask) np.testing.assert_almost_equal(c.with_mask(cmask).sum().value, d[dmask].sum()) self.c = self.d = None def test_flatten(self): c, d = self.c, self.d expected = d[d > 0.5] assert_allclose(c.flattened(), expected) self.c = self.d = None def test_flatten_weights(self): c, d = self.c, self.d expected = d[d > 0.5] ** 2 assert_allclose(c.flattened(weights=d), expected) self.c = self.d = None def test_slice(self): c, d = self.c, self.d expected = d[:3, :2, ::2] expected = 
expected[expected > 0.5] assert_allclose(c[0:3, 0:2, 0::2].flattened(), expected) self.c = self.d = None class TestNumpyMethods(BaseTest): def _check_numpy(self, cubemethod, array, func): for axis in [None, 0, 1, 2]: for how in ['auto', 'slice', 'cube', 'ray']: expected = func(array, axis=axis) actual = cubemethod(axis=axis) assert_allclose(actual, expected) def test_sum(self): d = np.where(self.d > 0.5, self.d, np.nan) self._check_numpy(self.c.sum, d, allbadtonan(np.nansum)) # Need a secondary check to make sure it works with no # axis keyword being passed (regression test for issue introduced in # 150) assert np.all(self.c.sum().value == np.nansum(d)) self.c = self.d = None def test_max(self): d = np.where(self.d > 0.5, self.d, np.nan) self._check_numpy(self.c.max, d, np.nanmax) self.c = self.d = None def test_min(self): d = np.where(self.d > 0.5, self.d, np.nan) self._check_numpy(self.c.min, d, np.nanmin) self.c = self.d = None def test_argmax(self): d = np.where(self.d > 0.5, self.d, -10) self._check_numpy(self.c.argmax, d, np.nanargmax) self.c = self.d = None def test_argmin(self): d = np.where(self.d > 0.5, self.d, 10) self._check_numpy(self.c.argmin, d, np.nanargmin) self.c = self.d = None @pytest.mark.parametrize('iterate_rays', (True,False)) def test_median(self, iterate_rays, use_dask): # Make sure that medians ignore empty/bad/NaN values m = np.empty(self.d.shape[1:]) for y in range(m.shape[0]): for x in range(m.shape[1]): ray = self.d[:, y, x] # the cube mask is for values >0.5 ray = ray[ray > 0.5] m[y, x] = np.median(ray) if use_dask: if iterate_rays: self.c = self.d = None pytest.skip() else: scmed = self.c.median(axis=0) else: scmed = self.c.median(axis=0, iterate_rays=iterate_rays) assert_allclose(scmed, m) assert not np.any(np.isnan(scmed.value)) assert scmed.unit == self.c.unit self.c = self.d = None @pytest.mark.skipif('NUMPY_LT_19') def test_bad_median_apply(self): # this is a test for manually-applied numpy medians, which are different # from the cube.median method that does "the right thing" # # for regular median, we expect a failure, which is why we don't use # regular median. scmed = self.c.apply_numpy_function(np.median, axis=0) # this checks whether numpy <=1.9.3 has a bug? 
# as far as I can tell, np==1.9.3 no longer has this bug/feature #if LooseVersion(np.__version__) <= LooseVersion('1.9.3'): # # print statements added so we get more info in the travis builds # print("Numpy version is: {0}".format(LooseVersion(np.__version__))) # assert np.count_nonzero(np.isnan(scmed)) == 5 #else: # print("Numpy version is: {0}".format(LooseVersion(np.__version__))) assert np.count_nonzero(np.isnan(scmed)) == 6 scmed = self.c.apply_numpy_function(np.nanmedian, axis=0) assert np.count_nonzero(np.isnan(scmed)) == 0 # use a more aggressive mask to force there to be some all-nan axes m2 = self.c>0.74*self.c.unit scmed = self.c.with_mask(m2).apply_numpy_function(np.nanmedian, axis=0) assert np.count_nonzero(np.isnan(scmed)) == 1 self.c = self.d = None @pytest.mark.parametrize('iterate_rays', (True,False)) def test_bad_median(self, iterate_rays, use_dask): # This should have the same result as np.nanmedian, though it might be # faster if bottleneck loads if use_dask: if iterate_rays: self.c = self.d = None pytest.skip() else: scmed = self.c.median(axis=0) else: scmed = self.c.median(axis=0, iterate_rays=iterate_rays) assert np.count_nonzero(np.isnan(scmed)) == 0 m2 = self.c>0.74*self.c.unit if use_dask: scmed = self.c.with_mask(m2).median(axis=0) else: scmed = self.c.with_mask(m2).median(axis=0, iterate_rays=iterate_rays) assert np.count_nonzero(np.isnan(scmed)) == 1 self.c = self.d = None @pytest.mark.parametrize(('pct', 'iterate_rays'), (zip((3,25,50,75,97)*2,(True,)*5 + (False,)*5))) def test_percentile(self, pct, iterate_rays, use_dask): m = np.empty(self.d.sum(axis=0).shape) for y in range(m.shape[0]): for x in range(m.shape[1]): ray = self.d[:, y, x] ray = ray[ray > 0.5] m[y, x] = np.percentile(ray, pct) if use_dask: if iterate_rays: self.c = self.d = None pytest.skip() else: scpct = self.c.percentile(pct, axis=0) else: scpct = self.c.percentile(pct, axis=0, iterate_rays=iterate_rays) assert_allclose(scpct, m) assert not np.any(np.isnan(scpct.value)) assert scpct.unit == self.c.unit self.c = self.d = None @pytest.mark.parametrize('method', ('sum', 'min', 'max', 'std', 'mad_std', 'median', 'argmin', 'argmax')) def test_transpose(self, method, data_adv, data_vad, use_dask): c1, d1 = cube_and_raw(data_adv, use_dask=use_dask) c2, d2 = cube_and_raw(data_vad, use_dask=use_dask) for axis in [None, 0, 1, 2]: assert_allclose(getattr(c1, method)(axis=axis), getattr(c2, method)(axis=axis)) if not use_dask: # check that all these accept progressbar kwargs assert_allclose(getattr(c1, method)(axis=axis, progressbar=True), getattr(c2, method)(axis=axis, progressbar=True)) self.c = self.d = None @pytest.mark.parametrize('method', ('argmax_world', 'argmin_world')) def test_transpose_arg_world(self, method, data_adv, data_vad, use_dask): c1, d1 = cube_and_raw(data_adv, use_dask=use_dask) c2, d2 = cube_and_raw(data_vad, use_dask=use_dask) # The spectral axis should work in all of these test cases. axis = 0 assert_allclose(getattr(c1, method)(axis=axis), getattr(c2, method)(axis=axis)) if not use_dask: # check that all these accept progressbar kwargs assert_allclose(getattr(c1, method)(axis=axis, progressbar=True), getattr(c2, method)(axis=axis, progressbar=True)) # But the spatial axes should fail since the pixel axes are correlated to # the WCS celestial axes. Currently this will happen for ALL celestial axes. 
for axis in [1, 2]: with pytest.raises(utils.WCSCelestialError, match=re.escape(f"{method} requires the celestial axes")): assert_allclose(getattr(c1, method)(axis=axis), getattr(c2, method)(axis=axis)) self.c = self.d = None @pytest.mark.parametrize('method', ('argmax_world', 'argmin_world')) def test_arg_world(self, method, data_adv, use_dask): c1, d1 = cube_and_raw(data_adv, use_dask=use_dask) # Pixel operation is same name with "_world" removed. arg0_pixel = getattr(c1, method.split("_")[0])(axis=0) arg0_world = np.take_along_axis(c1.spectral_axis[:, np.newaxis, np.newaxis], arg0_pixel[np.newaxis, :, :], axis=0).squeeze() assert_allclose(getattr(c1, method)(axis=0), arg0_world) self.c = self.d = None class TestSlab(BaseTest): def test_closest_spectral_channel(self): c = self.c ms = u.m / u.s assert c.closest_spectral_channel(-321214.698632 * ms) == 0 assert c.closest_spectral_channel(-319926.48366321 * ms) == 1 assert c.closest_spectral_channel(-318638.26869442 * ms) == 2 assert c.closest_spectral_channel(-320000 * ms) == 1 assert c.closest_spectral_channel(-340000 * ms) == 0 assert c.closest_spectral_channel(0 * ms) == 3 self.c = self.d = None def test_spectral_channel_bad_units(self): with pytest.raises(u.UnitsError, match=re.escape("'value' should be in frequency equivalent or velocity units (got s)")): self.c.closest_spectral_channel(1 * u.s) with pytest.raises(u.UnitsError, match=re.escape("Spectral axis is in velocity units and 'value' is in frequency-equivalent units - use SpectralCube.with_spectral_unit first to convert the cube to frequency-equivalent units, or search for a velocity instead")): self.c.closest_spectral_channel(1. * u.Hz) self.c = self.d = None def test_slab(self): ms = u.m / u.s c2 = self.c.spectral_slab(-320000 * ms, -318600 * ms) assert_allclose(c2._data, self.d[1:3]) assert c2._mask is not None self.c = self.d = None def test_slab_reverse_limits(self): ms = u.m / u.s c2 = self.c.spectral_slab(-318600 * ms, -320000 * ms) assert_allclose(c2._data, self.d[1:3]) assert c2._mask is not None self.c = self.d = None def test_slab_preserves_wcs(self): # regression test ms = u.m / u.s crpix = list(self.c._wcs.wcs.crpix) self.c.spectral_slab(-318600 * ms, -320000 * ms) assert list(self.c._wcs.wcs.crpix) == crpix self.c = self.d = None class TestSlabMultiBeams(BaseTestMultiBeams, TestSlab): """ same tests with multibeams """ pass # class TestRepr(BaseTest): # def test_repr(self): # assert repr(self.c) == """ # SpectralCube with shape=(4, 3, 2) and unit=K: # n_x: 2 type_x: RA---SIN unit_x: deg range: 24.062698 deg: 24.063349 deg # n_y: 3 type_y: DEC--SIN unit_y: deg range: 29.934094 deg: 29.935209 deg # n_s: 4 type_s: VOPT unit_s: km / s range: -321.215 km / s: -317.350 km / s # """.strip() # self.c = self.d = None # def test_repr_withunit(self): # self.c._unit = u.Jy # assert repr(self.c) == """ # SpectralCube with shape=(4, 3, 2) and unit=Jy: # n_x: 2 type_x: RA---SIN unit_x: deg range: 24.062698 deg: 24.063349 deg # n_y: 3 type_y: DEC--SIN unit_y: deg range: 29.934094 deg: 29.935209 deg # n_s: 4 type_s: VOPT unit_s: km / s range: -321.215 km / s: -317.350 km / s # """.strip() # self.c = self.d = None @pytest.mark.skipif('not YT_INSTALLED') class TestYt(): @pytest.fixture(autouse=True) def setup_method_fixture(self, request, data_adv, use_dask): print("HERE") self.cube = SpectralCube.read(data_adv, use_dask=use_dask) # Without any special arguments print(self.cube) print(self.cube.to_yt) self.ytc1 = self.cube.to_yt() # With spectral factor = 0.5 self.spectral_factor = 
0.5
        self.ytc2 = self.cube.to_yt(spectral_factor=self.spectral_factor)
        # With nprocs = 4
        self.nprocs = 4
        self.ytc3 = self.cube.to_yt(nprocs=self.nprocs)
        print("DONE")

    def test_yt(self):
        # The following assertions just make sure everything is
        # kosher with the datasets generated in different ways
        ytc1, ytc2, ytc3 = self.ytc1, self.ytc2, self.ytc3
        ds1, ds2, ds3 = ytc1.dataset, ytc2.dataset, ytc3.dataset

        assert_array_equal(ds1.domain_dimensions, ds2.domain_dimensions)
        assert_array_equal(ds2.domain_dimensions, ds3.domain_dimensions)
        assert_allclose(ds1.domain_left_edge.value, ds2.domain_left_edge.value)
        assert_allclose(ds2.domain_left_edge.value, ds3.domain_left_edge.value)
        assert_allclose(ds1.domain_width.value,
                        ds2.domain_width.value*np.array([1,1,1.0/self.spectral_factor]))
        assert_allclose(ds1.domain_width.value, ds3.domain_width.value)
        assert self.nprocs == len(ds3.index.grids)

        ds1.index
        ds2.index
        ds3.index
        unit1 = ds1.field_info["fits","flux"].units
        unit2 = ds2.field_info["fits","flux"].units
        unit3 = ds3.field_info["fits","flux"].units
        ds1.quan(1.0, unit1)
        ds2.quan(1.0, unit2)
        ds3.quan(1.0, unit3)

        self.cube = self.ytc1 = self.ytc2 = self.ytc3 = None

    @pytest.mark.skipif('YT_LT_301', reason='yt 3.0 has a FITS-related bug')
    def test_yt_fluxcompare(self):
        # Now check that we can compute quantities of the flux
        # and that they are equal
        ytc1, ytc2, ytc3 = self.ytc1, self.ytc2, self.ytc3
        ds1, ds2, ds3 = ytc1.dataset, ytc2.dataset, ytc3.dataset
        dd1 = ds1.all_data()
        dd2 = ds2.all_data()
        dd3 = ds3.all_data()
        flux1_tot = dd1.quantities.total_quantity("flux")
        flux2_tot = dd2.quantities.total_quantity("flux")
        flux3_tot = dd3.quantities.total_quantity("flux")
        flux1_min, flux1_max = dd1.quantities.extrema("flux")
        flux2_min, flux2_max = dd2.quantities.extrema("flux")
        flux3_min, flux3_max = dd3.quantities.extrema("flux")
        assert flux1_tot == flux2_tot
        assert flux1_tot == flux3_tot
        assert flux1_min == flux2_min
        assert flux1_min == flux3_min
        assert flux1_max == flux2_max
        assert flux1_max == flux3_max

        self.cube = self.ytc1 = self.ytc2 = self.ytc3 = None

    def test_yt_roundtrip_wcs(self):
        # Now test round-trip conversions between yt and world coordinates
        ytc1, ytc2, ytc3 = self.ytc1, self.ytc2, self.ytc3
        ds1, ds2, ds3 = ytc1.dataset, ytc2.dataset, ytc3.dataset
        yt_coord1 = ds1.domain_left_edge + np.random.random(size=3)*ds1.domain_width
        world_coord1 = ytc1.yt2world(yt_coord1)
        assert_allclose(ytc1.world2yt(world_coord1), yt_coord1.value)
        yt_coord2 = ds2.domain_left_edge + np.random.random(size=3)*ds2.domain_width
        world_coord2 = ytc2.yt2world(yt_coord2)
        assert_allclose(ytc2.world2yt(world_coord2), yt_coord2.value)
        yt_coord3 = ds3.domain_left_edge + np.random.random(size=3)*ds3.domain_width
        world_coord3 = ytc3.yt2world(yt_coord3)
        assert_allclose(ytc3.world2yt(world_coord3), yt_coord3.value)

        self.cube = self.ytc1 = self.ytc2 = self.ytc3 = None


def test_read_write_roundtrip(tmpdir, data_adv, use_dask):
    cube = SpectralCube.read(data_adv, use_dask=use_dask)
    tmp_file = str(tmpdir.join('test.fits'))
    cube.write(tmp_file)
    cube2 = SpectralCube.read(tmp_file, use_dask=use_dask)

    assert cube.shape == cube2.shape
    assert_allclose(cube._data, cube2._data)
    if (((hasattr(_wcs, '__version__')
          and LooseVersion(_wcs.__version__) < LooseVersion('5.9'))
         or not hasattr(_wcs, '__version__'))):
        # see https://github.com/astropy/astropy/pull/3992 for reasons:
        # we should upgrade this for 5.10 when the absolute accuracy is
        # maximized
        assert cube._wcs.to_header_string() == cube2._wcs.to_header_string()
    # in 5.11 and maybe even 5.12, the round trip fails.
    # Maybe https://github.com/astropy/astropy/issues/4292 will solve it?


@pytest.mark.parametrize(('memmap', 'base'),
                         ((True, mmap.mmap),
                          (False, None)))
def test_read_memmap(memmap, base, data_adv):
    cube = SpectralCube.read(data_adv, memmap=memmap)

    bb = cube.base
    while hasattr(bb, 'base'):
        bb = bb.base

    if base is None:
        assert bb is None
    else:
        assert isinstance(bb, base)


def _dummy_cube(use_dask):
    data = np.array([[[0, 1, 2, 3, 4]]])
    wcs = WCS(naxis=3)
    wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'VELO-HEL']

    def lower_threshold(data, wcs, view=()):
        return data[view] > 0

    m1 = FunctionMask(lower_threshold)

    cube = SpectralCube(data, wcs=wcs, mask=m1, use_dask=use_dask)
    return cube


def test_with_mask(use_dask):

    def upper_threshold(data, wcs, view=()):
        return data[view] < 3

    m2 = FunctionMask(upper_threshold)

    cube = _dummy_cube(use_dask)
    cube2 = cube.with_mask(m2)

    assert_allclose(cube._get_filled_data(), [[[np.nan, 1, 2, 3, 4]]])
    assert_allclose(cube2._get_filled_data(), [[[np.nan, 1, 2, np.nan, np.nan]]])


def test_with_mask_with_boolean_array(use_dask):
    cube = _dummy_cube(use_dask)
    mask = np.random.random(cube.shape) > 0.5
    cube2 = cube.with_mask(mask, inherit_mask=False)
    assert isinstance(cube2._mask, BooleanArrayMask)
    assert cube2._mask._wcs is cube._wcs
    assert cube2._mask._mask is mask


def test_with_mask_with_good_array_shape(use_dask):
    cube = _dummy_cube(use_dask)
    mask = np.zeros((1, 5), dtype=bool)
    cube2 = cube.with_mask(mask, inherit_mask=False)
    assert isinstance(cube2._mask, BooleanArrayMask)
    np.testing.assert_equal(cube2._mask._mask, mask.reshape((1, 1, 5)))


def test_with_mask_with_bad_array_shape(use_dask):
    cube = _dummy_cube(use_dask)
    mask = np.zeros((5, 5), dtype=bool)
    with pytest.raises(ValueError) as exc:
        cube.with_mask(mask)
    assert exc.value.args[0] == ("Mask shape is not broadcastable to data shape: "
                                 "(5, 5) vs (1, 1, 5)")


class TestMasks(BaseTest):

    @pytest.mark.parametrize('op', (operator.gt, operator.lt,
                                    operator.le, operator.ge))
    def test_operator_threshold(self, op):
        # choose thresh to exercise proper equality tests
        thresh = self.d.ravel()[0]
        m = op(self.c, thresh*u.K)
        self.c._mask = m

        expected = self.d[op(self.d, thresh)]
        actual = self.c.flattened()
        assert_allclose(actual, expected)

        self.c = self.d = None


def test_preserve_spectral_unit(data_advs, use_dask):
    # astropy.wcs has a tendency to change spectral units from e.g. km/s to
    # m/s, so we have a workaround - check that it works.
    cube, data = cube_and_raw(data_advs, use_dask=use_dask)

    cube_freq = cube.with_spectral_unit(u.GHz)
    assert cube_freq.wcs.wcs.cunit[2] == 'Hz'  # check internal
    assert cube_freq.spectral_axis.unit is u.GHz

    # Check that this preferred unit is propagated
    new_cube = cube_freq.with_fill_value(fill_value=3.4)
    assert new_cube.spectral_axis.unit is u.GHz
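# Background for the endianness test below (an illustrative sketch, not part
# of the original suite): '>f4' is big-endian float32, '<f4' is
# little-endian, and a byteorder of '=' marks native order after conversion.
def test_native_byteorder_sketch():
    big = np.array([1.0, 2.0], dtype='>f4')
    native = big.astype('=f4')
    assert native.dtype.byteorder == '='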
""" pytest.importorskip('bottleneck') big = np.array([[[1],[2]]], dtype='>f4') lil = np.array([[[1],[2]]], dtype='' assert xlil.dtype.byteorder == '=' def test_header_naxis(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) assert cube.header['NAXIS'] == 3 # NOT data.ndim == 4 assert cube.header['NAXIS1'] == data.shape[3] assert cube.header['NAXIS2'] == data.shape[2] assert cube.header['NAXIS3'] == data.shape[1] assert 'NAXIS4' not in cube.header def test_slicing(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask) # just to check that we're starting in the right place assert cube.shape == (2,3,4) sl = cube[:,1,:] assert sl.shape == (2,4) v = cube[1:2,:,:] assert v.shape == (1,3,4) # make sure this works. Not sure what keys to test for... v.header assert cube[:,:,:].shape == (2,3,4) assert cube[:,:].shape == (2,3,4) assert cube[:].shape == (2,3,4) assert cube[:1,:1,:1].shape == (1,1,1) @pytest.mark.parametrize(('view','naxis'), [((slice(None), 1, slice(None)), 2), ((1, slice(None), slice(None)), 2), ((slice(None), slice(None), 1), 2), ((slice(None), slice(None), slice(1)), 3), ((slice(1), slice(1), slice(1)), 3), ((slice(None, None, -1), slice(None), slice(None)), 3), ]) def test_slice_wcs(view, naxis, data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) sl = cube[view] assert sl.wcs.naxis == naxis # Ensure slices work without a beam cube._beam = None sl = cube[view] assert sl.wcs.naxis == naxis def test_slice_wcs_reversal(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) view = (slice(None,None,-1), slice(None), slice(None)) rcube = cube[view] rrcube = rcube[view] np.testing.assert_array_equal(np.diff(cube.spectral_axis), -np.diff(rcube.spectral_axis)) np.testing.assert_array_equal(rrcube.spectral_axis.value, cube.spectral_axis.value) np.testing.assert_array_equal(rcube.spectral_axis.value, cube.spectral_axis.value[::-1]) np.testing.assert_array_equal(rrcube.world_extrema.value, cube.world_extrema.value) # check that the lon, lat arrays are *entirely* unchanged np.testing.assert_array_equal(rrcube.spatial_coordinate_map[0].value, cube.spatial_coordinate_map[0].value) np.testing.assert_array_equal(rrcube.spatial_coordinate_map[1].value, cube.spatial_coordinate_map[1].value) def test_spectral_slice_preserve_units(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube = cube.with_spectral_unit(u.km/u.s) sl = cube[:,0,0] assert cube._spectral_unit == u.km/u.s assert sl._spectral_unit == u.km/u.s assert cube.spectral_axis.unit == u.km/u.s assert sl.spectral_axis.unit == u.km/u.s def test_header_units_consistent(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube_ms = cube.with_spectral_unit(u.m/u.s) cube_kms = cube.with_spectral_unit(u.km/u.s) cube_Mms = cube.with_spectral_unit(u.Mm/u.s) assert cube.header['CUNIT3'] == 'km s-1' assert cube_ms.header['CUNIT3'] == 'm s-1' assert cube_kms.header['CUNIT3'] == 'km s-1' assert cube_Mms.header['CUNIT3'] == 'Mm s-1' # Wow, the tolerance here is really terrible... 
assert_allclose(cube_Mms.header['CDELT3'], cube.header['CDELT3']/1e3,rtol=1e-3,atol=1e-5) assert_allclose(cube.header['CDELT3'], cube_kms.header['CDELT3'],rtol=1e-2,atol=1e-5) assert_allclose(cube.header['CDELT3']*1e3, cube_ms.header['CDELT3'],rtol=1e-2,atol=1e-5) cube_freq = cube.with_spectral_unit(u.Hz) assert cube_freq.header['CUNIT3'] == 'Hz' cube_freq_GHz = cube.with_spectral_unit(u.GHz) assert cube_freq_GHz.header['CUNIT3'] == 'GHz' def test_spectral_unit_conventions(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube_frq = cube.with_spectral_unit(u.Hz) cube_opt = cube.with_spectral_unit(u.km/u.s, rest_value=cube_frq.spectral_axis[0], velocity_convention='optical') cube_rad = cube.with_spectral_unit(u.km/u.s, rest_value=cube_frq.spectral_axis[0], velocity_convention='radio') cube_rel = cube.with_spectral_unit(u.km/u.s, rest_value=cube_frq.spectral_axis[0], velocity_convention='relativistic') # should all be exactly 0 km/s for x in (cube_rel.spectral_axis[0], cube_rad.spectral_axis[0], cube_opt.spectral_axis[0]): np.testing.assert_almost_equal(0,x.value) assert cube_rel.spectral_axis[1] != cube_rad.spectral_axis[1] assert cube_opt.spectral_axis[1] != cube_rad.spectral_axis[1] assert cube_rel.spectral_axis[1] != cube_opt.spectral_axis[1] assert cube_rel.velocity_convention == u.doppler_relativistic assert cube_rad.velocity_convention == u.doppler_radio assert cube_opt.velocity_convention == u.doppler_optical def test_invalid_spectral_unit_conventions(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) with pytest.raises(ValueError, match=("Velocity convention must be radio, optical, " "or relativistic.")): cube.with_spectral_unit(u.km/u.s, velocity_convention='invalid velocity convention') @pytest.mark.parametrize('rest', (50, 50*u.K)) def test_invalid_rest(rest, data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) with pytest.raises(ValueError, match=("Rest value must be specified as an astropy " "quantity with spectral equivalence.")): cube.with_spectral_unit(u.km/u.s, velocity_convention='radio', rest_value=rest) def test_airwave_to_wave(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._wcs.wcs.ctype[2] = 'AWAV' cube._wcs.wcs.cunit[2] = 'm' cube._spectral_unit = u.m cube._wcs.wcs.cdelt[2] = 1e-7 cube._wcs.wcs.crval[2] = 5e-7 ax1 = cube.spectral_axis ax2 = cube.with_spectral_unit(u.m).spectral_axis np.testing.assert_almost_equal(spectral_axis.air_to_vac(ax1).value, ax2.value) @pytest.mark.parametrize(('func','how','axis','filename'), itertools.product(('sum','std','max','min','mean'), ('slice','cube','auto'), (0,1,2), ('data_advs', 'data_advs_nobeam'), ), indirect=['filename']) def test_twod_numpy(func, how, axis, filename, use_dask): # Check that a numpy function returns the correct result when applied along # one axis # This is partly a regression test for #211 cube, data = cube_and_raw(filename, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K if use_dask: if how != 'cube': pytest.skip() else: proj = getattr(cube,func)(axis=axis) else: proj = getattr(cube,func)(axis=axis, how=how) # data has a redundant 1st axis dproj = getattr(data,func)(axis=(0,axis+1)).squeeze() assert isinstance(proj, Projection) np.testing.assert_equal(proj.value, dproj) assert cube.unit == proj.unit @pytest.mark.parametrize(('func','how','axis','filename'), itertools.product(('sum','std','max','min','mean'), ('slice','cube','auto'), ((0,1),(1,2),(0,2)), ('data_advs', 
'data_advs_nobeam'), ), indirect=['filename']) def test_twod_numpy_twoaxes(func, how, axis, filename, use_dask): # Check that a numpy function returns the correct result when applied along # one axis # This is partly a regression test for #211 cube, data = cube_and_raw(filename, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K with warnings.catch_warnings(record=True) as wrn: if use_dask: if how != 'cube': pytest.skip() else: spec = getattr(cube,func)(axis=axis) else: spec = getattr(cube,func)(axis=axis, how=how) if func == 'mean' and axis != (1,2): assert 'Averaging over a spatial and a spectral' in str(wrn[-1].message) # data has a redundant 1st axis dspec = getattr(data.squeeze(),func)(axis=axis) if axis == (1,2): assert isinstance(spec, OneDSpectrum) assert cube.unit == spec.unit np.testing.assert_almost_equal(spec.value, dspec) else: np.testing.assert_almost_equal(spec, dspec) def test_preserves_header_values(data_advs, use_dask): # Check that the non-WCS header parameters are preserved during projection cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K cube._header['OBJECT'] = 'TestName' if use_dask: proj = cube.sum(axis=0) else: proj = cube.sum(axis=0, how='auto') assert isinstance(proj, Projection) assert proj.header['OBJECT'] == 'TestName' assert proj.hdu.header['OBJECT'] == 'TestName' def test_preserves_header_meta_values(data_advs, use_dask): # Check that additional parameters in meta are preserved cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube.meta['foo'] = 'bar' assert cube.header['FOO'] == 'bar' # check that long keywords are also preserved cube.meta['too_long_keyword'] = 'too_long_information' assert 'too_long_keyword=too_long_information' in cube.header['COMMENT'] if use_dask: proj = cube.sum(axis=0) else: proj = cube.sum(axis=0, how='auto') # Checks that the header is preserved when passed to LDOs for ldo in (proj, cube[:,0,0]): assert isinstance(ldo, LowerDimensionalObject) assert ldo.header['FOO'] == 'bar' assert ldo.hdu.header['FOO'] == 'bar' # make sure that the meta preservation works on the LDOs themselves too ldo.meta['bar'] = 'foo' assert ldo.header['BAR'] == 'foo' assert 'too_long_keyword=too_long_information' in ldo.header['COMMENT'] @pytest.mark.parametrize(('func', 'filename'), itertools.product(('sum','std','max','min','mean'), ('data_advs', 'data_advs_nobeam',), ), indirect=['filename']) def test_oned_numpy(func, filename, use_dask): # Check that a numpy function returns an appropriate spectrum cube, data = cube_and_raw(filename, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K spec = getattr(cube,func)(axis=(1,2)) dspec = getattr(data,func)(axis=(2,3)).squeeze() assert isinstance(spec, (OneDSpectrum, VaryingResolutionOneDSpectrum)) # data has a redundant 1st axis np.testing.assert_equal(spec.value, dspec) assert cube.unit == spec.unit def test_oned_slice(data_advs, use_dask): # Check that a slice returns an appropriate spectrum cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K spec = cube[:,0,0] assert isinstance(spec, OneDSpectrum) # data has a redundant 1st axis np.testing.assert_equal(spec.value, data[0,:,0,0]) assert cube.unit == spec.unit assert spec.header['BUNIT'] == cube.header['BUNIT'] def test_oned_slice_beams(data_sdav_beams, use_dask): # Check that a slice returns an appropriate spectrum cube, data = cube_and_raw(data_sdav_beams, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K spec = cube[:,0,0] 
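    # One beam per channel should survive the slice (a hedged sketch; the
    # fuller beam-slicing checks live in test_multibeam_slice below):
    assert len(spec.beams) == cube.shape[0]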
assert isinstance(spec, VaryingResolutionOneDSpectrum) # data has a redundant 1st axis np.testing.assert_equal(spec.value, data[:,0,0,0]) assert cube.unit == spec.unit assert spec.header['BUNIT'] == cube.header['BUNIT'] assert hasattr(spec, 'beams') assert 'BMAJ' in spec.hdulist[1].data.names def test_subcube_slab_beams(data_sdav_beams, use_dask): cube, data = cube_and_raw(data_sdav_beams, use_dask=use_dask) slcube = cube[1:] assert all(slcube.hdulist[1].data['CHAN'] == np.arange(slcube.shape[0])) try: # Make sure Beams has been sliced correctly assert all(cube.beams[1:] == slcube.beams) except TypeError: # in 69eac9241220d3552c06b173944cb7cdebeb47ef, radio_beam switched to # returning a single value assert cube.beams[1:] == slcube.beams # collapsing to one dimension raywise doesn't make sense and is therefore # not supported. @pytest.mark.parametrize('how', ('auto', 'cube', 'slice')) def test_oned_collapse(how, data_advs, use_dask): # Check that an operation along the spatial dims returns an appropriate # spectrum cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K if use_dask: if how != 'cube': pytest.skip() else: spec = cube.mean(axis=(1,2)) else: spec = cube.mean(axis=(1,2), how=how) assert isinstance(spec, OneDSpectrum) # data has a redundant 1st axis np.testing.assert_equal(spec.value, data.mean(axis=(0,2,3))) assert cube.unit == spec.unit assert spec.header['BUNIT'] == cube.header['BUNIT'] def test_oned_collapse_beams(data_sdav_beams, use_dask): # Check that an operation along the spatial dims returns an appropriate # spectrum cube, data = cube_and_raw(data_sdav_beams, use_dask=use_dask) cube._meta['BUNIT'] = 'K' cube._unit = u.K spec = cube.mean(axis=(1,2)) assert isinstance(spec, VaryingResolutionOneDSpectrum) # data has a redundant 1st axis np.testing.assert_equal(spec.value, data.mean(axis=(1,2,3))) assert cube.unit == spec.unit assert spec.header['BUNIT'] == cube.header['BUNIT'] assert hasattr(spec, 'beams') assert 'BMAJ' in spec.hdulist[1].data.names def test_preserve_bunit(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) assert cube.header['BUNIT'] == 'K' hdul = fits.open(data_advs) hdu = hdul[0] hdu.header['BUNIT'] = 'Jy' cube = SpectralCube.read(hdu) assert cube.unit == u.Jy assert cube.header['BUNIT'] == 'Jy' hdul.close() def test_preserve_beam(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) beam = Beam.from_fits_header(str(data_advs)) assert cube.beam == beam def test_beam_attach_to_header(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) header = cube._header.copy() del header["BMAJ"], header["BMIN"], header["BPA"] newcube = SpectralCube(data=data, wcs=cube.wcs, header=header, beam=cube.beam) assert cube.header["BMAJ"] == newcube.header["BMAJ"] assert cube.header["BMIN"] == newcube.header["BMIN"] assert cube.header["BPA"] == newcube.header["BPA"] # Should be in meta too assert newcube.meta['beam'] == cube.beam def test_beam_custom(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) header = cube._header.copy() beam = Beam.from_fits_header(header) del header["BMAJ"], header["BMIN"], header["BPA"] newcube = SpectralCube(data=data, wcs=cube.wcs, header=header) # newcube should now not have a beam # Should raise exception try: newcube.beam except utils.NoBeamError: pass # Attach the beam newcube = newcube.with_beam(beam=beam) assert newcube.beam == cube.beam # Header should be updated assert cube.header["BMAJ"] == 
newcube.header["BMAJ"] assert cube.header["BMIN"] == newcube.header["BMIN"] assert cube.header["BPA"] == newcube.header["BPA"] # Should be in meta too assert newcube.meta['beam'] == cube.beam # Try changing the beam properties newbeam = Beam(beam.major * 2) newcube2 = newcube.with_beam(beam=newbeam) assert newcube2.beam == newbeam # Header should be updated assert newcube2.header["BMAJ"] == newbeam.major.value assert newcube2.header["BMIN"] == newbeam.minor.value assert newcube2.header["BPA"] == newbeam.pa.value # Should be in meta too assert newcube2.meta['beam'] == newbeam def test_cube_with_no_beam(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) header = cube._header.copy() beam = Beam.from_fits_header(header) del header["BMAJ"], header["BMIN"], header["BPA"] newcube = SpectralCube(data=data, wcs=cube.wcs, header=header) # Accessing beam raises an error try: newcube.beam except utils.NoBeamError: pass # But is still has a beam attribute assert hasattr(newcube, "_beam") # Attach the beam newcube = newcube.with_beam(beam=beam) # But now it should have an accessible beam try: newcube.beam except utils.NoBeamError as exc: raise exc def test_multibeam_custom(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) # Make a new set of beams that differs from the original. new_beams = Beams([1.] * cube.shape[0] * u.deg) # Attach the beam newcube = cube.with_beams(new_beams, raise_error_jybm=False) try: assert all(new_beams == newcube.beams) except TypeError: # in 69eac9241220d3552c06b173944cb7cdebeb47ef, radio_beam switched to # returning a single value assert new_beams == newcube.beams @pytest.mark.openfiles_ignore @pytest.mark.xfail(raises=ValueError, strict=True) def test_multibeam_custom_wrongshape(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) # Make a new set of beams that differs from the original. new_beams = Beams([1.] * cube.shape[0] * u.deg) # Attach the beam cube.with_beams(new_beams[:1], raise_error_jybm=False) @pytest.mark.openfiles_ignore @pytest.mark.xfail(raises=utils.BeamUnitsError, strict=True) def test_multibeam_jybm_error(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) # Make a new set of beams that differs from the original. new_beams = Beams([1.] 
* cube.shape[0] * u.deg) # Attach the beam newcube = cube.with_beams(new_beams, raise_error_jybm=True) def test_multibeam_slice(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) assert isinstance(cube, VaryingResolutionSpectralCube) np.testing.assert_almost_equal(cube.beams[0].major.value, 0.4) np.testing.assert_almost_equal(cube.beams[0].minor.value, 0.1) np.testing.assert_almost_equal(cube.beams[3].major.value, 0.4) scube = cube[:2,:,:] np.testing.assert_almost_equal(scube.beams[0].major.value, 0.4) np.testing.assert_almost_equal(scube.beams[0].minor.value, 0.1) np.testing.assert_almost_equal(scube.beams[1].major.value, 0.3) np.testing.assert_almost_equal(scube.beams[1].minor.value, 0.2) flatslice = cube[0,:,:] np.testing.assert_almost_equal(flatslice.header['BMAJ'], (0.4/3600.)) # Test returning a VRODS spec = cube[:, 0, 0] assert (cube.beams == spec.beams).all() # And make sure that Beams gets slice for part of a spectrum spec_part = cube[:1, 0, 0] assert cube.beams[0] == spec.beams[0] def test_basic_unit_conversion(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) assert cube.unit == u.K mKcube = cube.to(u.mK) np.testing.assert_almost_equal(mKcube.filled_data[:].value, (cube.filled_data[:].value * 1e3)) def test_basic_unit_conversion_beams(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) cube._unit = u.K # want beams, but we want to force the unit to be something non-beamy cube._meta['BUNIT'] = 'K' assert cube.unit == u.K mKcube = cube.to(u.mK) np.testing.assert_almost_equal(mKcube.filled_data[:].value, (cube.filled_data[:].value * 1e3)) bunits_list = [u.Jy / u.beam, u.K, u.Jy / u.sr, u.Jy / u.pix, u.Jy / u.arcsec**2, u.mJy / u.beam, u.mK] @pytest.mark.parametrize(('init_unit'), bunits_list) def test_unit_conversions_general(data_advs, use_dask, init_unit): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = init_unit.to_string() cube._unit = init_unit # Check all unit conversion combos: for targ_unit in bunits_list: newcube = cube.to(targ_unit) if init_unit == targ_unit: np.testing.assert_almost_equal(newcube.filled_data[:].value, cube.filled_data[:].value) else: roundtrip_cube = newcube.to(init_unit) np.testing.assert_almost_equal(roundtrip_cube.filled_data[:].value, cube.filled_data[:].value) @pytest.mark.parametrize(('init_unit'), bunits_list) def test_multibeam_unit_conversions_general(data_vda_beams, use_dask, init_unit): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) cube._meta['BUNIT'] = init_unit.to_string() cube._unit = init_unit # Check all unit conversion combos: for targ_unit in bunits_list: newcube = cube.to(targ_unit) if init_unit == targ_unit: np.testing.assert_almost_equal(newcube.filled_data[:].value, cube.filled_data[:].value) else: roundtrip_cube = newcube.to(init_unit) np.testing.assert_almost_equal(roundtrip_cube.filled_data[:].value, cube.filled_data[:].value) def test_beam_jpix_checks_array(data_advs, use_dask): ''' Ensure round-trip consistency in our defined K -> Jy/pix conversions. 
''' cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = 'Jy / beam' cube._unit = u.Jy/u.beam jtok = cube.beam.jtok(cube.with_spectral_unit(u.GHz).spectral_axis) pixperbeam = cube.pixels_per_beam * u.pix cube_jypix = cube.to(u.Jy / u.pix) np.testing.assert_almost_equal(cube_jypix.filled_data[:].value, (cube.filled_data[:].value / pixperbeam).value) Kcube = cube.to(u.K) np.testing.assert_almost_equal(Kcube.filled_data[:].value, (cube_jypix.filled_data[:].value * jtok[:,None,None] * pixperbeam).value) # Round trips. roundtrip_cube = cube_jypix.to(u.Jy / u.beam) np.testing.assert_almost_equal(cube.filled_data[:].value, roundtrip_cube.filled_data[:].value) Kcube_from_jypix = cube_jypix.to(u.K) np.testing.assert_almost_equal(Kcube.filled_data[:].value, Kcube_from_jypix.filled_data[:].value) def test_multibeam_jpix_checks_array(data_vda_beams, use_dask): ''' Ensure round-trip consistency in our defined K -> Jy/pix conversions. ''' cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) cube._meta['BUNIT'] = 'Jy / beam' cube._unit = u.Jy/u.beam # NOTE: We are no longer using jtok_factors for conversions. This may need to be removed # in the future jtok = cube.jtok_factors() pixperbeam = cube.pixels_per_beam * u.pix cube_jypix = cube.to(u.Jy / u.pix) np.testing.assert_almost_equal(cube_jypix.filled_data[:].value, (cube.filled_data[:].value / pixperbeam[:, None, None]).value) Kcube = cube.to(u.K) np.testing.assert_almost_equal(Kcube.filled_data[:].value, (cube_jypix.filled_data[:].value * jtok[:,None,None] * pixperbeam[:, None, None]).value) # Round trips. roundtrip_cube = cube_jypix.to(u.Jy / u.beam) np.testing.assert_almost_equal(cube.filled_data[:].value, roundtrip_cube.filled_data[:].value) Kcube_from_jypix = cube_jypix.to(u.K) np.testing.assert_almost_equal(Kcube.filled_data[:].value, Kcube_from_jypix.filled_data[:].value) def test_beam_jtok_array(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) cube._meta['BUNIT'] = 'Jy / beam' cube._unit = u.Jy/u.beam jtok = cube.beam.jtok(cube.with_spectral_unit(u.GHz).spectral_axis) # test that the beam equivalencies are correctly automatically defined Kcube = cube.to(u.K) np.testing.assert_almost_equal(Kcube.filled_data[:].value, (cube.filled_data[:].value * jtok[:,None,None]).value) def test_multibeam_jtok_array(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) assert cube.meta['BUNIT'].strip() == 'Jy / beam' assert cube.unit.is_equivalent(u.Jy/u.beam) #equiv = [bm.jtok_equiv(frq) for bm, frq in zip(cube.beams, cube.with_spectral_unit(u.GHz).spectral_axis)] jtok = u.Quantity([bm.jtok(frq) for bm, frq in zip(cube.beams, cube.with_spectral_unit(u.GHz).spectral_axis)]) # don't try this, it's nonsense for the multibeam case # Kcube = cube.to(u.K, equivalencies=equiv) # np.testing.assert_almost_equal(Kcube.filled_data[:].value, # (cube.filled_data[:].value * # jtok[:,None,None]).value) # test that the beam equivalencies are correctly automatically defined Kcube = cube.to(u.K) np.testing.assert_almost_equal(Kcube.filled_data[:].value, (cube.filled_data[:].value * jtok[:,None,None]).value) def test_beam_jtok(data_advs, use_dask): # regression test for an error introduced when the previous test was solved # (the "is this an array?" 
test used len(x) where x could be scalar) cube, data = cube_and_raw(data_advs, use_dask=use_dask) # technically this should be jy/beam, but astropy's equivalency doesn't # handle this yet cube._meta['BUNIT'] = 'Jy' cube._unit = u.Jy equiv = cube.beam.jtok_equiv(np.median(cube.with_spectral_unit(u.GHz).spectral_axis)) jtok = cube.beam.jtok(np.median(cube.with_spectral_unit(u.GHz).spectral_axis)) Kcube = cube.to(u.K, equivalencies=equiv) np.testing.assert_almost_equal(Kcube.filled_data[:].value, (cube.filled_data[:].value * jtok).value) def test_varyres_moment(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) assert isinstance(cube, VaryingResolutionSpectralCube) # the beams are very different, but for this test we don't care cube.beam_threshold = 1.0 with pytest.warns(UserWarning, match="Arithmetic beam averaging is being performed"): m0 = cube.moment0() assert_quantity_allclose(m0.meta['beam'].major, 0.35*u.arcsec) def test_varyres_unitconversion_roundtrip(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) assert isinstance(cube, VaryingResolutionSpectralCube) assert cube.unit == u.Jy/u.beam roundtrip = cube.to(u.mJy/u.beam).to(u.Jy/u.beam) assert_quantity_allclose(cube.filled_data[:], roundtrip.filled_data[:]) # you can't straightforwardly roundtrip to Jy/beam yet # it requires a per-beam equivalency, which is why there's # a specific hack to go from Jy/beam (in each channel) -> K def test_append_beam_to_hdr(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) orig_hdr = fits.getheader(data_advs) assert cube.header['BMAJ'] == orig_hdr['BMAJ'] assert cube.header['BMIN'] == orig_hdr['BMIN'] assert cube.header['BPA'] == orig_hdr['BPA'] def test_cube_with_swapped_axes(data_vda, use_dask): """ Regression test for #208 """ cube, data = cube_and_raw(data_vda, use_dask=use_dask) # Check that masking works (this should apply a lazy mask) cube.filled_data[:] def test_jybeam_upper(data_vda_jybeam_upper, use_dask): cube, data = cube_and_raw(data_vda_jybeam_upper, use_dask=use_dask) assert cube.unit == u.Jy/u.beam assert hasattr(cube, 'beam') np.testing.assert_almost_equal(cube.beam.sr.value, (((1*u.arcsec/np.sqrt(8*np.log(2)))**2).to(u.sr)*2*np.pi).value) def test_jybeam_lower(data_vda_jybeam_lower, use_dask): cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) assert cube.unit == u.Jy/u.beam assert hasattr(cube, 'beam') np.testing.assert_almost_equal(cube.beam.sr.value, (((1*u.arcsec/np.sqrt(8*np.log(2)))**2).to(u.sr)*2*np.pi).value) def test_jybeam_whitespace(data_vda_jybeam_whitespace, use_dask): # Regression test for #257 (https://github.com/radio-astro-tools/spectral-cube/pull/257) cube, data = cube_and_raw(data_vda_jybeam_whitespace, use_dask=use_dask) assert cube.unit == u.Jy/u.beam assert hasattr(cube, 'beam') np.testing.assert_almost_equal(cube.beam.sr.value, (((1*u.arcsec/np.sqrt(8*np.log(2)))**2).to(u.sr)*2*np.pi).value) def test_beam_proj_meta(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) moment = cube.moment0(axis=0) # regression test for #250 assert 'beam' in moment.meta assert 'BMAJ' in moment.hdu.header slc = cube[0,:,:] assert 'beam' in slc.meta proj = cube.max(axis=0) assert 'beam' in proj.meta def test_proj_meta(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) moment = cube.moment0(axis=0) assert 'BUNIT' in moment.meta assert moment.meta['BUNIT'] == 'K' slc = cube[0,:,:] assert 'BUNIT' in slc.meta assert 
slc.meta['BUNIT'] == 'K' proj = cube.max(axis=0) assert 'BUNIT' in proj.meta assert proj.meta['BUNIT'] == 'K' def test_pix_sign(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) s,y,x = (cube._pix_size_slice(ii) for ii in range(3)) assert s>0 assert y>0 assert x>0 cube.wcs.wcs.cdelt *= -1 s,y,x = (cube._pix_size_slice(ii) for ii in range(3)) assert s>0 assert y>0 assert x>0 cube.wcs.wcs.pc *= -1 s,y,x = (cube._pix_size_slice(ii) for ii in range(3)) assert s>0 assert y>0 assert x>0 def test_varyres_moment_logic_issue364(data_vda_beams, use_dask): """ regression test for issue364 """ cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) assert isinstance(cube, VaryingResolutionSpectralCube) # the beams are very different, but for this test we don't care cube.beam_threshold = 1.0 with pytest.warns(UserWarning, match="Arithmetic beam averaging is being performed"): # note that cube.moment(order=0) is different from cube.moment0() # because cube.moment0() calls cube.moment(order=0, axis=(whatever)), # but cube.moment doesn't necessarily have to receive the axis kwarg m0 = cube.moment(order=0) # note that this is just a sanity check; one should never use the average beam assert_quantity_allclose(m0.meta['beam'].major, 0.35*u.arcsec) @pytest.mark.skipif('not casaOK') @pytest.mark.parametrize('filename', ['data_vda_beams', 'data_vda_beams_image'], indirect=['filename']) def test_mask_bad_beams(filename, use_dask): """ Prior to #543, this tested two different scenarios of beam masking. After that, the tests got mucked up because we can no longer have minor>major in the beams. """ if 'image' in str(filename) and not use_dask: pytest.skip() cube, data = cube_and_raw(filename, use_dask=use_dask) assert isinstance(cube, base_class.MultiBeamMixinClass) # make sure all of the beams are initially good (finite) assert np.all(cube.goodbeams_mask) # make sure cropping the cube maintains the mask assert np.all(cube[:3].goodbeams_mask) # middle two beams have same area masked_cube = cube.mask_out_bad_beams(0.01, reference_beam=Beam(0.3*u.arcsec, 0.2*u.arcsec, 60*u.deg)) assert np.all(masked_cube.mask.include()[:,0,0] == [False,True,True,False]) assert np.all(masked_cube.goodbeams_mask == [False,True,True,False]) mean = masked_cube.mean(axis=0) assert np.all(mean == cube[1:3,:,:].mean(axis=0)) #doesn't test anything any more # masked_cube2 = cube.mask_out_bad_beams(0.5,) # mean2 = masked_cube2.mean(axis=0) # assert np.all(mean2 == (cube[2,:,:]+cube[1,:,:])/2) # assert np.all(masked_cube2.goodbeams_mask == [False,True,True,False]) def test_convolve_to_equal(data_vda, use_dask): cube, data = cube_and_raw(data_vda, use_dask=use_dask) convolved = cube.convolve_to(cube.beam) assert np.all(convolved.filled_data[:].value == cube.filled_data[:].value) # And one channel plane = cube[0] convolved = plane.convolve_to(cube.beam) assert np.all(convolved.value == plane.value) # Pass a kwarg to the convolution function convolved = plane.convolve_to(cube.beam, nan_treatment='fill') def test_convolve_to(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) convolved = cube.convolve_to(Beam(0.5*u.arcsec)) # Pass a kwarg to the convolution function convolved = cube.convolve_to(Beam(0.5*u.arcsec), nan_treatment='fill') def test_convolve_to_jybeam_onebeam(point_source_5_one_beam, use_dask): cube, data = cube_and_raw(point_source_5_one_beam, use_dask=use_dask) convolved = cube.convolve_to(Beam(10*u.arcsec)) # The peak of the point source should remain constant in 
Jy/beam np.testing.assert_allclose(convolved[:, 5, 5].value, cube[:, 5, 5].value, atol=1e-5, rtol=1e-5) assert cube.unit == u.Jy / u.beam def test_convolve_to_jybeam_multibeams(point_source_5_spectral_beams, use_dask): cube, data = cube_and_raw(point_source_5_spectral_beams, use_dask=use_dask) convolved = cube.convolve_to(Beam(10*u.arcsec)) # The peak of the point source should remain constant in Jy/beam np.testing.assert_allclose(convolved[:, 5, 5].value, cube[:, 5, 5].value, atol=1e-5, rtol=1e-5) assert cube.unit == u.Jy / u.beam def test_convolve_to_with_bad_beams(data_vda_beams, use_dask): cube, data = cube_and_raw(data_vda_beams, use_dask=use_dask) convolved = cube.convolve_to(Beam(0.5*u.arcsec)) # From: https://github.com/radio-astro-tools/radio-beam/pull/87 # updated exception to BeamError when the beam cannot be deconvolved. # BeamError is not new in the radio_beam package, only its use here. # Keeping the ValueError for testing against 0.2, wcs=self.wcs) # Deliberately don't use a BooleanArrayMask to check auto-conversion mask2 = np.random.random((5, 20, 30)) > 0.4 stokes_data = dict(I=SpectralCube(self.data[0], wcs=self.wcs, use_dask=use_dask), Q=SpectralCube(self.data[1], wcs=self.wcs, use_dask=use_dask), U=SpectralCube(self.data[2], wcs=self.wcs, use_dask=use_dask), V=SpectralCube(self.data[3], wcs=self.wcs, use_dask=use_dask)) cube1 = StokesSpectralCube(stokes_data, mask=mask1) cube2 = cube1.with_mask(mask2) assert_equal(cube2.mask.include(), (mask1).include() & mask2) def test_mask_invalid_component_name(self, use_dask): stokes_data = {'BANANA': SpectralCube(self.data[0], wcs=self.wcs, use_dask=use_dask)} with pytest.raises(ValueError) as exc: cube = StokesSpectralCube(stokes_data) assert exc.value.args[0] == "Invalid Stokes component: BANANA - should be one of I, Q, U, V, RR, LL, RL, LR, XX, XY, YX, YY, RX, RY, LX, LY, XR, XL, YR, YL, PP, PQ, QP, QQ, RCircular, LCircular, Linear, Ptotal, Plinear, PFtotal, PFlinear, Pangle" def test_mask_invalid_shape(self, use_dask): stokes_data = dict(I=SpectralCube(self.data[0], wcs=self.wcs, use_dask=use_dask), Q=SpectralCube(self.data[1], wcs=self.wcs, use_dask=use_dask), U=SpectralCube(self.data[2], wcs=self.wcs, use_dask=use_dask), V=SpectralCube(self.data[3], wcs=self.wcs, use_dask=use_dask)) mask1 = BooleanArrayMask(np.random.random((5, 20, 15)) > 0.2, wcs=self.wcs) with pytest.raises(ValueError) as exc: cube1 = StokesSpectralCube(stokes_data, mask=mask1) assert exc.value.args[0] == "Mask shape is not broadcastable to data shape: (5, 20, 15) vs (5, 20, 30)" def test_separate_mask(self, use_dask): with NumpyRNGContext(12345): mask1 = BooleanArrayMask(np.random.random((5, 20, 30)) > 0.2, wcs=self.wcs) mask2 = [BooleanArrayMask(np.random.random((5, 20, 30)) > 0.4, wcs=self.wcs) for i in range(4)] mask3 = BooleanArrayMask(np.random.random((5, 20, 30)) > 0.2, wcs=self.wcs) stokes_data = dict(I=SpectralCube(self.data[0], wcs=self.wcs, mask=mask2[0], use_dask=use_dask), Q=SpectralCube(self.data[1], wcs=self.wcs, mask=mask2[1], use_dask=use_dask), U=SpectralCube(self.data[2], wcs=self.wcs, mask=mask2[2], use_dask=use_dask), V=SpectralCube(self.data[3], wcs=self.wcs, mask=mask2[3], use_dask=use_dask)) cube1 = StokesSpectralCube(stokes_data, mask=mask1) assert_equal(cube1.I.mask.include(), (mask1 & mask2[0]).include()) assert_equal(cube1.Q.mask.include(), (mask1 & mask2[1]).include()) assert_equal(cube1.U.mask.include(), (mask1 & mask2[2]).include()) assert_equal(cube1.V.mask.include(), (mask1 & mask2[3]).include()) cube2 = 
cube1.I.with_mask(mask3) assert_equal(cube2.mask.include(), (mask1 & mask2[0] & mask3).include()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_subcubes.py0000644000175100001710000001667600000000000023027 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import pytest from distutils.version import LooseVersion from astropy import units as u from astropy import wcs import numpy as np from . import path from .helpers import assert_allclose, assert_array_equal from .test_spectral_cube import cube_and_raw from ..spectral_axis import doppler_gamma, doppler_beta, doppler_z, get_rest_value_from_wcs try: import regions regionsOK = True REGIONS_GT_03 = LooseVersion(regions.__version__) >= LooseVersion('0.3') except ImportError: regionsOK = REGIONS_GT_03 = False try: import scipy scipyOK = True except ImportError: scipyOK = False def test_subcube(data_advs, use_dask): cube, data = cube_and_raw(data_advs, use_dask=use_dask) sc1x = cube.subcube(xlo=1, xhi=3) sc2x = cube.subcube(xlo=24.06269*u.deg, xhi=24.06206*u.deg) sc2b = cube.subcube(xlo=24.06206*u.deg, xhi=24.06269*u.deg) # Mixed should be equivalent to above sc3x = cube.subcube(xlo=24.06269*u.deg, xhi=3) sc4x = cube.subcube(xlo=1, xhi=24.06206*u.deg) assert sc1x.shape == (2,3,2) assert sc2x.shape == (2,3,2) assert sc2b.shape == (2,3,2) assert sc3x.shape == (2,3,2) assert sc4x.shape == (2,3,2) assert sc1x.wcs.wcs.compare(sc2x.wcs.wcs) assert sc1x.wcs.wcs.compare(sc2b.wcs.wcs) assert sc1x.wcs.wcs.compare(sc3x.wcs.wcs) assert sc1x.wcs.wcs.compare(sc4x.wcs.wcs) sc1y = cube.subcube(ylo=1, yhi=3) sc2y = cube.subcube(ylo=29.93464 * u.deg, yhi=29.93522 * u.deg) sc3y = cube.subcube(ylo=1, yhi=29.93522 * u.deg) sc4y = cube.subcube(ylo=29.93464 * u.deg, yhi=3) assert sc1y.shape == (2, 2, 4) assert sc2y.shape == (2, 2, 4) assert sc3y.shape == (2, 2, 4) assert sc4y.shape == (2, 2, 4) assert sc1y.wcs.wcs.compare(sc2y.wcs.wcs) assert sc1y.wcs.wcs.compare(sc3y.wcs.wcs) assert sc1y.wcs.wcs.compare(sc4y.wcs.wcs) # Test mixed slicing in both spatial directions sc1xy = cube.subcube(xlo=1, xhi=3, ylo=1, yhi=3) sc2xy = cube.subcube(xlo=24.06269*u.deg, xhi=3, ylo=1,yhi=29.93522 * u.deg) sc3xy = cube.subcube(xlo=1, xhi=24.06206*u.deg, ylo=29.93464 * u.deg, yhi=3) assert sc1xy.shape == (2, 2, 2) assert sc2xy.shape == (2, 2, 2) assert sc3xy.shape == (2, 2, 2) assert sc1xy.wcs.wcs.compare(sc2xy.wcs.wcs) assert sc1xy.wcs.wcs.compare(sc3xy.wcs.wcs) sc1z = cube.subcube(zlo=1, zhi=2) sc2z = cube.subcube(zlo=-320*u.km/u.s, zhi=-319*u.km/u.s) sc3z = cube.subcube(zlo=1, zhi=-319 * u.km / u.s) sc4z = cube.subcube(zlo=-320*u.km/u.s, zhi=2) assert sc1z.shape == (1, 3, 4) assert sc2z.shape == (1, 3, 4) assert sc3z.shape == (1, 3, 4) assert sc4z.shape == (1, 3, 4) assert sc1z.wcs.wcs.compare(sc2z.wcs.wcs) assert sc1z.wcs.wcs.compare(sc3z.wcs.wcs) assert sc1z.wcs.wcs.compare(sc4z.wcs.wcs) sc5 = cube.subcube() assert sc5.shape == cube.shape assert sc5.wcs.wcs.compare(cube.wcs.wcs) assert np.all(sc5._data == cube._data) @pytest.mark.skipif('not scipyOK', reason='Could not import scipy') @pytest.mark.skipif('not regionsOK', reason='Could not import regions') @pytest.mark.skipif('not REGIONS_GT_03', reason='regions version should be >= 0.3') @pytest.mark.parametrize('regfile', ('255-fk5.reg', '255-pixel.reg'), ) def test_ds9region_255(regfile, data_255, use_dask): # specific test for correctness cube, data = cube_and_raw(data_255, use_dask=use_dask) shapelist = 
regions.read_ds9(path(regfile)) subcube = cube.subcube_from_regions(shapelist) assert_array_equal(subcube[0, :, :].value, np.array([11, 12, 16, 17]).reshape((2, 2))) @pytest.mark.skipif('not scipyOK', reason='Could not import scipy') @pytest.mark.skipif('not regionsOK', reason='Could not import regions') @pytest.mark.skipif('not REGIONS_GT_03', reason='regions version should be >= 0.3') @pytest.mark.parametrize(('regfile', 'result'), (('fk5.reg', (slice(None), 1, 1)), ('fk5_twoboxes.reg', (slice(None), 1, 1)), ('image.reg', (slice(None), 1, slice(None))), ( 'partial_overlap_image.reg', (slice(None), 1, 1)), ('no_overlap_image.reg', ValueError), ('partial_overlap_fk5.reg', (slice(None), 1, 1)), ('no_overlap_fk5.reg', ValueError), )) def test_ds9region_new(regfile, result, data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) regionlist = regions.read_ds9(path(regfile)) if isinstance(result, type) and issubclass(result, Exception): with pytest.raises(result): sc = cube.subcube_from_regions(regionlist) else: sc = cube.subcube_from_regions(regionlist) scsum = sc.sum() dsum = data[result].sum() assert_allclose(scsum, dsum) #region = 'fk5\ncircle(29.9346557, 24.0623827, 0.11111)' #subcube = cube.subcube_from_ds9region(region) # THIS TEST FAILS! # I think the coordinate transformation in ds9 is wrong; # it uses kapteyn? #region = 'circle(2,2,2)' #subcube = cube.subcube_from_ds9region(region) @pytest.mark.skipif('not scipyOK', reason='Could not import scipy') @pytest.mark.skipif('not regionsOK', reason='Could not import regions') @pytest.mark.skipif('not REGIONS_GT_03', reason='regions version should be >= 0.3') def test_regions_spectral(data_adv, use_dask): cube, data = cube_and_raw(data_adv, use_dask=use_dask) rf_cube = get_rest_value_from_wcs(cube.wcs).to("GHz", equivalencies=u.spectral()) # content of image.reg regpix = regions.RectanglePixelRegion(regions.PixCoord(0.5, 1), width=4, height=2) # Velocity range in doppler_optical same as that of the cube. 
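# --- Editorial sketch (hedged; standalone). The pattern under test: attach a
# spectral interval to a region through its `meta['range']` entry, and
# `subcube_from_regions` then crops spatially *and* spectrally. The toy cube
# and all of its header values here are assumptions; as the skipifs above
# note, the optional `regions` (>= 0.3) and `scipy` dependencies are required.
import numpy as np
import regions as _regions
from astropy import units as u
from astropy.wcs import WCS
from spectral_cube import SpectralCube

_w = WCS(naxis=3)
_w.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'VOPT']
_w.wcs.cunit = ['deg', 'deg', 'm/s']
_w.wcs.cdelt = [-1e-3, 1e-3, 500.]
_w.wcs.crval = [24.06, 29.93, -320e3]

_demo = SpectralCube(data=np.random.rand(8, 16, 16) * u.K, wcs=_w)
_reg = _regions.RectanglePixelRegion(_regions.PixCoord(8, 8), width=4, height=4)
_reg.meta['range'] = [-320 * u.km / u.s, -318 * u.km / u.s]
_sub = _demo.subcube_from_regions([_reg])  # cropped in x, y, and velocity
# --- End sketch; the test now builds the optical-velocity range that the
# comment above describes.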
vel_range_optical = u.Quantity([-318 * u.km/u.s, -320 * u.km/u.s]) regpix.meta['range'] = list(vel_range_optical) sc1 = cube.subcube_from_regions([regpix]) scsum1 = sc1.sum() freq_range = vel_range_optical.to("GHz", equivalencies=u.doppler_optical(rf_cube)) regpix.meta['range'] = list(freq_range) sc2 = cube.subcube_from_regions([regpix]) scsum2 = sc2.sum() regpix.meta['restfreq'] = rf_cube vel_range_gamma = freq_range.to("km/s", equivalencies=doppler_gamma(rf_cube)) regpix.meta['range'] = list(vel_range_gamma) regpix.meta['veltype'] = 'GAMMA' sc3 = cube.subcube_from_regions([regpix]) scsum3 = sc3.sum() vel_range_beta = freq_range.to("km/s", equivalencies=doppler_beta(rf_cube)) regpix.meta['range'] = list(vel_range_beta) regpix.meta['veltype'] = 'BETA' sc4 = cube.subcube_from_regions([regpix]) scsum4 = sc4.sum() vel_range_z = freq_range.to("km/s", equivalencies=doppler_z(rf_cube)) regpix.meta['range'] = list(vel_range_z) regpix.meta['veltype'] = 'Z' sc5 = cube.subcube_from_regions([regpix]) scsum5 = sc5.sum() dsum = data[1:-1, 1, :].sum() assert_allclose(scsum1, dsum) # Proves that the vel/freq conversion works assert_allclose(scsum1, scsum2) assert_allclose(scsum2, scsum3) assert_allclose(scsum3, scsum4) assert_allclose(scsum4, scsum5) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_visualization.py0000644000175100001710000000353500000000000024103 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import pytest from distutils.version import LooseVersion from .test_spectral_cube import cube_and_raw @pytest.mark.openfiles_ignore def test_projvis(data_vda_jybeam_lower, use_dask): pytest.importorskip('matplotlib') cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) mom0 = cube.moment0() mom0.quicklook(use_aplpy=False) def test_proj_imshow(data_vda_jybeam_lower, use_dask): plt = pytest.importorskip('matplotlib.pyplot') cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) mom0 = cube.moment0() if LooseVersion(plt.matplotlib.__version__) < LooseVersion('2.1'): # imshow is now only compatible with more recent versions of matplotlib # (apparently 2.0.2 was still incompatible) plt.imshow(mom0.value) else: plt.imshow(mom0) @pytest.mark.openfiles_ignore def test_projvis_aplpy(tmp_path, data_vda_jybeam_lower, use_dask): pytest.importorskip('aplpy') cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) mom0 = cube.moment0() mom0.quicklook(use_aplpy=True, filename=tmp_path / 'test.png') def test_mask_quicklook(data_vda_jybeam_lower, use_dask): pytest.importorskip('aplpy') cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) cube.mask.quicklook(view=(0, slice(None), slice(None)), use_aplpy=True) @pytest.mark.openfiles_ignore def test_to_glue(data_vda_jybeam_lower, use_dask): pytest.importorskip('glue') cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) cube.to_glue(start_gui=False) @pytest.mark.openfiles_ignore def test_to_pvextractor(data_vda_jybeam_lower, use_dask): pytest.importorskip('pvextractor') cube, data = cube_and_raw(data_vda_jybeam_lower, use_dask=use_dask) cube.to_pvextractor() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/test_wcs_utils.py0000644000175100001710000001636500000000000023223 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import pytest import warnings from 
astropy.io import fits import numpy as np from ..wcs_utils import (WCS, drop_axis, wcs_swapaxes, add_stokes_axis_to_wcs, axis_names, slice_wcs, check_equality, strip_wcs_from_header) from . import path def test_wcs_dropping(): wcs = WCS(naxis=4) wcs.wcs.pc = np.zeros([4, 4]) np.fill_diagonal(wcs.wcs.pc, np.arange(1, 5)) pc = wcs.wcs.pc # for later use below dropped = drop_axis(wcs, 0) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([2, 3, 4])) dropped = drop_axis(wcs, 1) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([1, 3, 4])) dropped = drop_axis(wcs, 2) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([1, 2, 4])) dropped = drop_axis(wcs, 3) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([1, 2, 3])) wcs = WCS(naxis=4) wcs.wcs.cd = pc dropped = drop_axis(wcs, 0) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([2, 3, 4])) dropped = drop_axis(wcs, 1) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([1, 3, 4])) dropped = drop_axis(wcs, 2) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([1, 2, 4])) dropped = drop_axis(wcs, 3) assert np.all(dropped.wcs.get_pc().diagonal() == np.array([1, 2, 3])) def test_wcs_swapping(): wcs = WCS(naxis=4) wcs.wcs.pc = np.zeros([4, 4]) np.fill_diagonal(wcs.wcs.pc, np.arange(1, 5)) pc = wcs.wcs.pc # for later use below swapped = wcs_swapaxes(wcs, 0, 1) assert np.all(swapped.wcs.get_pc().diagonal() == np.array([2, 1, 3, 4])) swapped = wcs_swapaxes(wcs, 0, 3) assert np.all(swapped.wcs.get_pc().diagonal() == np.array([4, 2, 3, 1])) swapped = wcs_swapaxes(wcs, 2, 3) assert np.all(swapped.wcs.get_pc().diagonal() == np.array([1, 2, 4, 3])) wcs = WCS(naxis=4) wcs.wcs.cd = pc swapped = wcs_swapaxes(wcs, 0, 1) assert np.all(swapped.wcs.get_pc().diagonal() == np.array([2, 1, 3, 4])) swapped = wcs_swapaxes(wcs, 0, 3) assert np.all(swapped.wcs.get_pc().diagonal() == np.array([4, 2, 3, 1])) swapped = wcs_swapaxes(wcs, 2, 3) assert np.all(swapped.wcs.get_pc().diagonal() == np.array([1, 2, 4, 3])) def test_add_stokes(): wcs = WCS(naxis=3) for ii in range(4): outwcs = add_stokes_axis_to_wcs(wcs, ii) assert outwcs.wcs.naxis == 4 def test_axis_names(data_adv, data_vad): wcs = WCS(str(data_adv)) assert axis_names(wcs) == ['RA', 'DEC', 'VOPT'] wcs = WCS(str(data_vad)) assert axis_names(wcs) == ['VOPT', 'RA', 'DEC'] def test_wcs_slice(): wcs = WCS(naxis=3) wcs.wcs.crpix = [50., 45., 30.] wcs_new = slice_wcs(wcs, (slice(10,20), slice(None), slice(20,30))) np.testing.assert_allclose(wcs_new.wcs.crpix, [30., 45., 20.]) def test_wcs_slice_reversal(): wcs = WCS(naxis=3) wcs.wcs.crpix = [50., 45., 30.] wcs.wcs.crval = [0., 0., 0.] wcs.wcs.cdelt = [1., 1., 1.] wcs_new = slice_wcs(wcs, (slice(None, None, -1), slice(None), slice(None)), shape=[100., 150., 200.]) spaxis = wcs.sub([0]).wcs_pix2world(np.arange(100), 0) new_spaxis = wcs_new.sub([0]).wcs_pix2world(np.arange(100), 0) np.testing.assert_allclose(spaxis, new_spaxis[::-1]) def test_reversal_roundtrip(): wcs = WCS(naxis=3) wcs.wcs.crpix = [50., 45., 30.] wcs.wcs.crval = [0., 0., 0.] wcs.wcs.cdelt = [1., 1., 1.] 
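# --- Editorial sketch (hedged). The axis-manipulation helpers exercised
# above permute (or drop) entries of the PC matrix; `drop_axis` and
# `wcs_swapaxes` are the helpers imported at the top of this module, and
# `np` is already imported here too.
_w4 = WCS(naxis=4)
_w4.wcs.pc = np.diag([1., 2., 3., 4.])
assert np.allclose(drop_axis(_w4, 1).wcs.get_pc().diagonal(), [1., 3., 4.])
assert np.allclose(wcs_swapaxes(_w4, 0, 3).wcs.get_pc().diagonal(),
                   [4., 2., 3., 1.])
# --- End sketch; the reversal round-trip continues below.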
wcs_new = slice_wcs(wcs, (slice(None, None, -1), slice(None), slice(None)), shape=[100., 150., 200.]) spaxis = wcs.sub([0]).wcs_pix2world(np.arange(100), 0) new_spaxis = wcs_new.sub([0]).wcs_pix2world(np.arange(100), 0) np.testing.assert_allclose(spaxis, new_spaxis[::-1]) re_reverse = slice_wcs(wcs_new, (slice(None, None, -1), slice(None), slice(None)), shape=[100., 150., 200.]) new_spaxis = re_reverse.sub([0]).wcs_pix2world(np.arange(100), 0) np.testing.assert_allclose(spaxis, new_spaxis[::-1]) #These are NOT equal, but they are equivalent: CRVAL and CRPIX are shifted #by an acceptable amount # assert check_equality(wcs, re_reverse) re_re_reverse = slice_wcs(re_reverse, (slice(None, None, -1), slice(None), slice(None)), shape=[100., 150., 200.]) re_re_re_reverse = slice_wcs(re_re_reverse, (slice(None, None, -1), slice(None), slice(None)), shape=[100., 150., 200.]) assert check_equality(re_re_re_reverse, re_reverse) def test_wcs_comparison(): wcs1 = WCS(naxis=3) wcs1.wcs.crpix = np.array([50., 45., 30.], dtype='float32') wcs2 = WCS(naxis=3) wcs2.wcs.crpix = np.array([50., 45., 30.], dtype='float64') wcs3 = WCS(naxis=3) wcs3.wcs.crpix = np.array([50., 45., 31.], dtype='float64') wcs4 = WCS(naxis=3) wcs4.wcs.crpix = np.array([50., 45., 30.0001], dtype='float64') assert check_equality(wcs1,wcs2) assert not check_equality(wcs1,wcs3) assert check_equality(wcs1, wcs3, wcs_tolerance=1.0e1) assert not check_equality(wcs1,wcs4) assert check_equality(wcs1, wcs4, wcs_tolerance=1e-3) @pytest.mark.parametrize('fn', ('cubewcs1.hdr', 'cubewcs2.hdr')) def test_strip_wcs(fn): header1 = fits.Header.fromtextfile(path(fn)) header1_stripped = strip_wcs_from_header(header1) with open(path(fn),'r') as fh: hdrlines = fh.readlines() newfn = fn.replace('.hdr', '_blanks.hdr') hdrlines.insert(-20,"\n") hdrlines.insert(-1,"\n") with open(path(newfn),'w') as fh: fh.writelines(hdrlines) header2 = fits.Header.fromtextfile(path(newfn)) header2_stripped = strip_wcs_from_header(header2) assert header1_stripped == header2_stripped def test_wcs_slice_unmatched_celestial(): wcs = WCS(naxis=3) wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'FREQ'] wcs.wcs.crpix = [50., 45., 30.] # drop RA with warnings.catch_warnings(record=True) as wrn: wcs_new = drop_axis(wcs, 0) assert 'is being removed' in str(wrn[-1].message) # drop Dec with warnings.catch_warnings(record=True) as wrn: wcs_new = drop_axis(wcs, 1) assert 'is being removed' in str(wrn[-1].message) with warnings.catch_warnings(record=True) as wrn: wcs_new = slice_wcs(wcs, (slice(10,20), 0, slice(20,30)), drop_degenerate=True) assert 'is being removed' in str(wrn[-1].message) def test_wcs_downsampling(): """ Regression tests for #525 These are a series of simple tests I verified with pen and paper, but it's always worth checking me again. """ wcs = WCS(naxis=1) wcs.wcs.ctype = ['FREQ',] wcs.wcs.crpix = [1.,] nwcs = slice_wcs(wcs, slice(0, None, 1)) assert nwcs.wcs.crpix[0] == 1 nwcs = slice_wcs(wcs, slice(0, None, 2)) assert nwcs.wcs.crpix[0] == 0.75 nwcs = slice_wcs(wcs, slice(0, None, 4)) assert nwcs.wcs.crpix[0] == 0.625 nwcs = slice_wcs(wcs, slice(2, None, 1)) assert nwcs.wcs.crpix[0] == -1 nwcs = slice_wcs(wcs, slice(2, None, 2)) assert nwcs.wcs.crpix[0] == -0.25 nwcs = slice_wcs(wcs, slice(2, None, 4)) assert nwcs.wcs.crpix[0] == 0.125 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/tests/utilities.py0000644000175100001710000000700100000000000022146 0ustar00runnerdocker ''' Utilities for tests. 
''' from six.moves import zip import numpy as np import astropy.units as u from astropy.io import fits from astropy.utils import NumpyRNGContext from ..spectral_cube import SpectralCube def generate_header(pixel_scale, spec_scale, beamfwhm, imshape, v0): header = {'CDELT1': -(pixel_scale).to(u.deg).value, 'CDELT2': (pixel_scale).to(u.deg).value, 'BMAJ': beamfwhm.to(u.deg).value, 'BMIN': beamfwhm.to(u.deg).value, 'BPA': 0.0, 'CRPIX1': imshape[0] / 2., 'CRPIX2': imshape[1] / 2., 'CRVAL1': 0.0, 'CRVAL2': 0.0, 'CTYPE1': 'GLON-CAR', 'CTYPE2': 'GLAT-CAR', 'CUNIT1': 'deg', 'CUNIT2': 'deg', 'CRVAL3': v0, 'CUNIT3': spec_scale.unit.to_string(), 'CDELT3': spec_scale.value, 'CRPIX3': 1, 'CTYPE3': 'VRAD', 'BUNIT': 'K', } return fits.Header(header) def generate_hdu(data, pixel_scale, spec_scale, beamfwhm, v0): imshape = data.shape[1:] header = generate_header(pixel_scale, spec_scale, beamfwhm, imshape, v0) return fits.PrimaryHDU(data, header) def gaussian(x, amp, mean, sigma): return amp * np.exp(- (x - mean)**2 / (2 * sigma**2)) def generate_gaussian_cube(shape=(100, 25, 25), sigma=8., amp=1., noise=None, spec_scale=1 * u.km / u.s, pixel_scale=1 * u.arcsec, beamfwhm=3 * u.arcsec, v0=None, vel_surface=None, seed=247825498, use_dask=None): ''' Generate a SpectralCube with Gaussian profiles. The peak velocity positions can be given with `vel_surface`. Otherwise, the peaks of the profiles are randomly assigned positions in the cubes. This is primarily to test shuffling and stacking of spectra, rather than trying to be physically meaningful. Returns ------- spec_cube : SpectralCube The generated cube. mean_positions : array The peak positions in the cube. ''' test_cube = np.empty(shape) mean_positions = np.empty(shape[1:]) spec_middle = int(shape[0] / 2) spec_quarter = int(shape[0] / 4) if v0 is None: v0 = 0 with NumpyRNGContext(seed): spec_inds = np.mgrid[-spec_middle:spec_middle] * spec_scale.value if len(spec_inds) == 0: spec_inds = np.array([0]) spat_inds = np.indices(shape[1:]) for y, x in zip(spat_inds[0].flatten(), spat_inds[1].flatten()): # Lock the mean to within 25% from the centre if vel_surface is not None: mean_pos = vel_surface[y,x] else: mean_pos = \ np.random.uniform(low=spec_inds[spec_quarter], high=spec_inds[spec_quarter + spec_middle]) test_cube[:, y, x] = gaussian(spec_inds, amp, mean_pos, sigma) mean_positions[y, x] = mean_pos + v0 if noise is not None: test_cube[:, y, x] += np.random.normal(0, noise, shape[0]) test_hdu = generate_hdu(test_cube, pixel_scale, spec_scale, beamfwhm, spec_inds[0] + v0) spec_cube = SpectralCube.read(test_hdu, use_dask=use_dask) mean_positions = mean_positions * spec_scale.unit return spec_cube, mean_positions ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/utils.py0000644000175100001710000000624000000000000020135 0ustar00runnerdockerimport warnings from functools import wraps from astropy.utils.exceptions import AstropyUserWarning bigdataurl = "https://spectral-cube.readthedocs.io/en/latest/big_data.html" def cached(func): """ Decorator to cache function calls """ @wraps(func) def wrapper(self, *args): # The cache lives in the instance so that it gets garbage collected if (func, args) not in self._cache: self._cache[(func, args)] = func(self, *args) return self._cache[(func, args)] wrapper.wrapped_function = func return wrapper def warn_slow(function): @wraps(function) def wrapper(self, *args, **kwargs): # if the function accepts a 'how', the 'cube' approach requires the
whole cube in memory warn_how = (kwargs.get('how') == 'cube') or 'how' not in kwargs if self._is_huge and not self.allow_huge_operations and warn_how: raise ValueError("This function ({0}) requires loading the entire " "cube into memory, and the cube is large ({1} " "pixels), so by default we disable this operation. " "To enable the operation, set " "`cube.allow_huge_operations=True` and try again. " "Alternatively, you may want to consider using an " "approach that does not load the whole cube into " "memory by specifying how='slice' or how='ray'. " "See {bigdataurl} for details." .format(str(function), self.size, bigdataurl=bigdataurl)) elif warn_how and not self._is_huge: # TODO: add check for whether cube has been loaded into memory warnings.warn("This function ({0}) requires loading the entire cube into " "memory and may therefore be slow.".format(str(function)), PossiblySlowWarning ) return function(self, *args, **kwargs) return wrapper class SpectralCubeWarning(AstropyUserWarning): pass class UnsupportedIterationStrategyWarning(SpectralCubeWarning): pass class VarianceWarning(SpectralCubeWarning): pass class SliceWarning(SpectralCubeWarning): pass class BeamAverageWarning(SpectralCubeWarning): pass class BeamWarning(SpectralCubeWarning): pass class WCSCelestialError(Exception): pass class WCSMismatchWarning(SpectralCubeWarning): pass class NotImplementedWarning(SpectralCubeWarning): pass class StokesWarning(SpectralCubeWarning): pass class ExperimentalImplementationWarning(SpectralCubeWarning): pass class PossiblySlowWarning(SpectralCubeWarning): pass class SmoothingWarning(SpectralCubeWarning): pass class NonFiniteBeamsWarning(SpectralCubeWarning): pass class WCSWarning(SpectralCubeWarning): pass class FITSWarning(SpectralCubeWarning): pass class BadVelocitiesWarning(SpectralCubeWarning): pass class FITSReadError(Exception): pass class NoBeamError(Exception): pass class BeamUnitsError(Exception): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 spectral-cube-0.6.0/spectral_cube/version.py0000644000175100001710000000021600000000000020457 0ustar00runnerdocker# coding: utf-8 # file generated by setuptools_scm # don't change, don't track in version control version = '0.6.0' version_tuple = (0, 6, 0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/visualization-tools.py0000644000175100001710000001217300000000000023036 0ustar00runnerdockerimport os import numpy as np from astropy.utils.console import ProgressBar def check_ffmpeg(ffmpeg_cmd): returncode = os.system(f'{ffmpeg_cmd} > /dev/null 2&> /dev/null') if returncode not in (0, 256): raise OSError(f"{ffmpeg_cmd} not found in the executable path." f" Return code {returncode}") def make_rgb_movie(cube, prefix, v1, v2, vmin, vmax, ffmpeg_cmd='ffmpeg'): """ Make a velocity movie with red, green, and blue corresponding to neighboring velocity channels Parameters ---------- cube : SpectralCube The cube to visualize prefix : str A prefix to prepend to the output png and movie. For example, it could be rgb/sourcename_speciesname v1 : Quantity A value in spectral-axis equivalent units specifying the starting velocity / frequency / wavelength v2 : Quantity A value in spectral-axis equivalent units specifying the ending velocity / frequency / wavelength vmin : float Minimum value to display vmax : float Maximum value to display ffmpeg_cmd : str The system command for ffmpeg. 
Required to make a movie """ import aplpy check_ffmpeg(ffmpeg_cmd) # Create the WCS template F = aplpy.FITSFigure(cube[0].hdu) F.show_grayscale() # determine pixel range p1 = cube.closest_spectral_channel(v1) p2 = cube.closest_spectral_channel(v2) for jj, ii in enumerate(ProgressBar(range(p1, p2-1))): rgb = np.array([cube[ii+2], cube[ii+1], cube[ii]]).T.swapaxes(0, 1) # in case you manually set min/max rgb[rgb > vmax] = 1 rgb[rgb < vmin] = 0 # this is the unsupported little bit... F._ax1.clear() F._ax1.imshow((rgb-vmin)/(vmax-vmin), extent=F._extent) v1_ = int(np.round(cube.spectral_axis[ii].value)) v2_ = int(np.round(cube.spectral_axis[ii+2].value)) # then write out the files F.save('{2}_v{0}to{1}.png'.format(v1_, v2_, prefix)) # make a sorted version for use with ffmpeg if os.path.exists('{prefix}{0:04d}.png'.format(jj, prefix=prefix)): os.remove('{prefix}{0:04d}.png'.format(jj, prefix=prefix)) os.link('{2}_v{0}to{1}.png'.format(v1_, v2_, prefix), '{prefix}{0:04d}.png'.format(jj, prefix=prefix)) os.system('{ffmpeg} -r 10 -y -i {prefix}%04d.png -c:v ' 'libx264 -pix_fmt yuv420p -vf ' '"scale=1024:768" -r 10' # "scale=1024:768,setpts=10*PTS" ' {prefix}_RGB_movie.mp4'.format(prefix=prefix, ffmpeg=ffmpeg_cmd)) def make_multispecies_rgb(cube_r, cube_g, cube_b, prefix, v1, v2, vmin, vmax, ffmpeg_cmd='ffmpeg'): """ Parameters ---------- cube_r, cube_g, cube_b : SpectralCube The three cubes to visualize. They should have identical spatial and spectral dimensions. prefix : str A prefix to prepend to the output png and movie. For example, it could be rgb/sourcename_speciesname v1 : Quantity A value in spectral-axis equivalent units specifying the starting velocity / frequency / wavelength v2 : Quantity A value in spectral-axis equivalent units specifying the ending velocity / frequency / wavelength vmin : float Minimum value to display (constant for all 3 colors) vmax : float Maximum value to display (constant for all 3 colors) ffmpeg_cmd : str The system command for ffmpeg. Required to make a movie """ import aplpy check_ffmpeg(ffmpeg_cmd) # Create the WCS template F = aplpy.FITSFigure(cube_r[0].hdu) F.show_grayscale() assert cube_r.shape == cube_b.shape assert cube_g.shape == cube_b.shape # determine pixel range p1 = cube_r.closest_spectral_channel(v1) p2 = cube_r.closest_spectral_channel(v2) for jj, ii in enumerate(ProgressBar(range(p1, p2+1))): rgb = np.array([cube_r[ii].value, cube_g[ii].value, cube_b[ii].value ]).T.swapaxes(0, 1) # in case you manually set min/max rgb[rgb > vmax] = 1 rgb[rgb < vmin] = 0 # this is the unsupported little bit... 
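# --- Editorial sketch (hedged). Both movie helpers scale each frame with the
# same clip-then-normalize recipe before display; in isolation (using the
# module-level numpy import) the arithmetic is:
_vmin, _vmax = 0.0, 2.0
_frame = np.array([[-1.0, 0.5], [1.0, 3.0]])
_frame[_frame > _vmax] = 1   # overflow pinned to 1, mirroring the code above
_frame[_frame < _vmin] = 0
_scaled = (_frame - _vmin) / (_vmax - _vmin)  # mapped onto [0, 1] for imshow
# --- End sketch; the "unsupported little bit" flagged above follows.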
F._ax1.clear() F._ax1.imshow((rgb-vmin)/(vmax-vmin), extent=F._extent) v1_ = int(np.round(cube_r.spectral_axis[ii].value)) # then write out the files F.refresh() F.save('{1}_v{0}.png'.format(v1_, prefix)) # make a sorted version for use with ffmpeg if os.path.exists('{prefix}{0:04d}.png'.format(jj, prefix=prefix)): os.remove('{prefix}{0:04d}.png'.format(jj, prefix=prefix)) os.link('{1}_v{0}.png'.format(v1_, prefix), '{prefix}{0:04d}.png'.format(jj, prefix=prefix)) os.system('{ffmpeg} -r 10 -y -i {prefix}%04d.png' ' -c:v libx264 -pix_fmt yuv420p -vf ' '"scale=1024:768" -r 10' # "scale=1024:768,setpts=10*PTS" ' {prefix}_RGB_movie.mp4'.format(prefix=prefix, ffmpeg=ffmpeg_cmd)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/wcs_utils.py0000644000175100001710000006007700000000000021021 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import numpy as np from astropy.wcs import WCS import warnings from astropy import units as u from astropy import log from astropy.wcs import InconsistentAxisTypesError from .utils import WCSWarning wcs_parameters_to_preserve = ['cel_offset', 'dateavg', 'dateobs', 'equinox', 'latpole', 'lonpole', 'mjdavg', 'mjdobs', 'name', 'obsgeo', 'phi0', 'radesys', 'restfrq', 'restwav', 'specsys', 'ssysobs', 'ssyssrc', 'theta0', 'velangl', 'velosys', 'zsource'] wcs_projections = {"AZP", "SZP", "TAN", "STG", "SIN", "ARC", "ZPN", "ZEA", "AIR", "CYP", "CEA", "CAR", "MER", "COP", "COE", "COD", "COO", "SFL", "PAR", "MOL", "AIT", "BON", "PCO", "TSC", "CSC", "QSC", "HPX", "XPH"} # not writable: # 'lat', 'lng', 'lattyp', 'lngtyp', bad_spectypes_mapping = {'VELOCITY':'VELO', 'WAVELENG':'WAVE', } def drop_axis(wcs, dropax): """ Drop the ax on axis dropax Remove an axis from the WCS Parameters ---------- wcs: astropy.wcs.WCS The WCS with naxis to be chopped to naxis-1 dropax: int The index of the WCS to drop, counting from 0 (i.e., python convention, not FITS convention) """ inds = list(range(wcs.wcs.naxis)) inds.pop(dropax) inds = np.array(inds) return reindex_wcs(wcs, inds) def add_stokes_axis_to_wcs(wcs, add_before_ind): """ Add a new Stokes axis that is uncorrelated with any other axes Parameters ---------- wcs: astropy.wcs.WCS The WCS to add to add_before_ind: int Index of the WCS to insert the new Stokes axis in front of. 
To add at the end, do add_before_ind = wcs.wcs.naxis """ naxin = wcs.wcs.naxis naxout = naxin + 1 inds = list(range(naxout)) inds.pop(add_before_ind) inds = np.array(inds) outwcs = WCS(naxis=naxout) for par in wcs_parameters_to_preserve: setattr(outwcs.wcs, par, getattr(wcs.wcs, par)) pc = np.zeros([naxout, naxout]) pc[inds[:, np.newaxis], inds[np.newaxis, :]] = wcs.wcs.get_pc() pc[add_before_ind, add_before_ind] = 1 def append_to_posn(val, posn, lst): """ insert a value at index into a list """ return list(lst)[:posn] + [val] + list(lst)[posn:] outwcs.wcs.crpix = append_to_posn(1, add_before_ind, wcs.wcs.crpix) outwcs.wcs.cdelt = append_to_posn(1, add_before_ind, wcs.wcs.get_cdelt()) outwcs.wcs.crval = append_to_posn(1, add_before_ind, wcs.wcs.crval) outwcs.wcs.cunit = append_to_posn("", add_before_ind, wcs.wcs.cunit) outwcs.wcs.ctype = append_to_posn("STOKES", add_before_ind, wcs.wcs.ctype) outwcs.wcs.cname = append_to_posn("STOKES", add_before_ind, wcs.wcs.cname) outwcs.wcs.pc = pc return outwcs def wcs_swapaxes(wcs, ax0, ax1): """ Swap axes in a WCS Parameters ---------- wcs: astropy.wcs.WCS The WCS to have its axes swapped ax0: int ax1: int The indices of the WCS to be swapped, counting from 0 (i.e., python convention, not FITS convention) """ inds = list(range(wcs.wcs.naxis)) inds[ax0], inds[ax1] = inds[ax1], inds[ax0] inds = np.array(inds) return reindex_wcs(wcs, inds) def reindex_wcs(wcs, inds): """ Re-index a WCS given indices. The number of axes may be reduced. Parameters ---------- wcs: astropy.wcs.WCS The WCS to be manipulated inds: np.array(dtype='int') The indices of the array to keep in the output. e.g. swapaxes: [0,2,1,3] dropaxes: [0,1,3] """ if not isinstance(inds, np.ndarray): raise TypeError("Indices must be an ndarray") if inds.dtype.kind != 'i': raise TypeError('Indices must be integers') outwcs = WCS(naxis=len(inds)) for par in wcs_parameters_to_preserve: setattr(outwcs.wcs, par, getattr(wcs.wcs, par)) cdelt = wcs.wcs.get_cdelt() pc = wcs.wcs.get_pc() outwcs.wcs.crpix = wcs.wcs.crpix[inds] outwcs.wcs.cdelt = cdelt[inds] outwcs.wcs.crval = wcs.wcs.crval[inds] outwcs.wcs.cunit = [wcs.wcs.cunit[i] for i in inds] outwcs.wcs.ctype = [wcs.wcs.ctype[i] for i in inds] outwcs.wcs.cname = [wcs.wcs.cname[i] for i in inds] outwcs.wcs.pc = pc[inds[:, None], inds[None, :]] matched_projections = [prj for prj in wcs_projections if any(prj in x for x in outwcs.wcs.ctype)] matchproj_count = [sum(prj in x for x in outwcs.wcs.ctype) for prj in matched_projections] if any(n == 1 for n in matchproj_count): # unmatched celestial axes = there is only one of them for prj in matched_projections: match = [prj in ct for ct in outwcs.wcs.ctype].index(True) outwcs.wcs.ctype[match] = outwcs.wcs.ctype[match].split("-")[0] warnings.warn("Slicing across a celestial axis results " "in an invalid WCS, so the celestial " "projection ({0}) is being removed. " "The WCS indices being kept were {1}." 
.format(prj, inds), WCSWarning) pv_cards = [] for i, j in enumerate(inds): for k, m, v in wcs.wcs.get_pv(): if k == j: pv_cards.append((i, m, v)) outwcs.wcs.set_pv(pv_cards) ps_cards = [] for i, j in enumerate(inds): for k, m, v in wcs.wcs.get_ps(): if k == j: ps_cards.append((i, m, v)) outwcs.wcs.set_ps(ps_cards) outwcs.wcs.set() return outwcs def axis_names(wcs): """ Extract world names for each coordinate axis Parameters ---------- wcs : astropy.wcs.WCS The WCS object to extract names from Returns ------- A tuple of names along each axis """ names = list(wcs.wcs.cname) types = wcs.wcs.ctype for i in range(len(names)): if len(names[i]) > 0: continue names[i] = types[i].split('-')[0] return names def slice_wcs(mywcs, view, shape=None, numpy_order=True, drop_degenerate=False): """ Slice a WCS instance using a Numpy slice. The order of the slice should be reversed (as for the data) compared to the natural WCS order. Parameters ---------- view : tuple A tuple containing the same number of slices as the WCS system. The ``step`` method, the third argument to a slice, is not presently supported. numpy_order : bool Use numpy order, i.e. slice the WCS so that an identical slice applied to a numpy array will slice the array and WCS in the same way. If set to `False`, the WCS will be sliced in FITS order, meaning the first slice will be applied to the *last* numpy index but the *first* WCS axis. drop_degenerate : bool Drop axes that are size-1, i.e., any that have an integer index as part of their view? Otherwise, an Exception will be raised. Returns ------- wcs_new : `~astropy.wcs.WCS` A new resampled WCS axis """ if hasattr(view, '__len__') and len(view) > mywcs.wcs.naxis: raise ValueError("Must have # of slices <= # of WCS axes") elif not hasattr(view, '__len__'): # view MUST be an iterable view = [view] if not all([isinstance(x, slice) for x in view]): if drop_degenerate: keeps = [mywcs.naxis-ii for ii,ind in enumerate(view) if isinstance(ind, slice)] keeps.sort() try: mywcs = mywcs.sub(keeps) except InconsistentAxisTypesError as ex: # make a copy of the WCS because we need to modify it inplace wcscp = mywcs.deepcopy() for ct in wcscp.celestial.wcs.ctype: match = list(wcscp.wcs.ctype).index(ct) prj = wcscp.wcs.ctype[match].split("-")[-1] wcscp.wcs.ctype[match] = wcscp.wcs.ctype[match].split("-")[0] warnings.warn("Slicing across a celestial axis results " "in an invalid WCS, so the celestial " "projection ({0}) is being removed. " "The view used was {1}." .format(prj, view), WCSWarning) mywcs = wcscp.sub(keeps) view = [x for x in view if isinstance(x, slice)] else: raise ValueError("Cannot downsample a WCS with indexing. 
Use " "wcs.sub or wcs.dropaxis if you want to remove " "axes.") wcs_new = mywcs.deepcopy() for i, iview in enumerate(view): if iview.step is not None and iview.start is None: # Slice from "None" is equivalent to slice from 0 (but one # might want to downsample, so allow slices with # None,None,step or None,stop,step) iview = slice(0, iview.stop, iview.step) if numpy_order: wcs_index = mywcs.wcs.naxis - 1 - i else: wcs_index = i if iview.step is not None and iview.step < 0: if iview.step != -1: raise NotImplementedError("Haven't dealt with resampling & reversing.") # reverse indexing requires the use of shape if shape is None: raise ValueError("Cannot reverse-index a WCS without " "specifying a shape.") if iview.stop is not None: refpix = iview.stop else: refpix = shape[i] # this will raise an inconsistent axis type error if slicing over # celestial axes is attempted # wcs_index+1 is required because sub([0]) = sub([all]) crval = mywcs.sub([wcs_index+1]).wcs_pix2world([refpix-1], 0)[0] crpix = 1 cdelt = mywcs.wcs.cdelt[wcs_index] wcs_new.wcs.crpix[wcs_index] = crpix wcs_new.wcs.crval[wcs_index] = crval wcs_new.wcs.cdelt[wcs_index] = -cdelt elif iview.start is not None: if iview.step not in (None, 1): crpix = mywcs.wcs.crpix[wcs_index] cdelt = mywcs.wcs.cdelt[wcs_index] # the logic is very annoying: the blc of the first pixel # is at 0.5, so that value must be subtracted to get into # numpy-compatible coordinates, then added back afterward crp = ((crpix - iview.start - 0.5)/iview.step + 0.5) # SIMPLE TEST: # view(0, None, 1) w/crpix = 1 # crp = 1 # view(0, None, 2) w/crpix = 1 # crp = 0.75 # view(0, None, 4) w/crpix = 1 # crp = 0.625 # view(2, None, 1) w/crpix = 1 # crp = -1 # view(2, None, 2) w/crpix = 1 # crp = -0.25 # view(2, None, 4) w/crpix = 1 # crp = 0.125 wcs_new.wcs.crpix[wcs_index] = crp wcs_new.wcs.cdelt[wcs_index] = cdelt * iview.step else: wcs_new.wcs.crpix[wcs_index] -= iview.start # Without this, may cause a regression of #234 wcs_new.wcs.set() return wcs_new def check_equality(wcs1, wcs2, warn_missing=False, ignore_keywords=['MJD-OBS', 'VELOSYS'], wcs_tolerance=0.0): """ Check if two WCSs are equal Parameters ---------- wcs1, wcs2: `astropy.wcs.WCS` The WCSs warn_missing: bool Issue warnings if one header is missing a keyword that the other has? ignore_keywords: list of str Keywords that are stored as part of the WCS but do not define part of the coordinate system and therefore can be safely ignored. wcs_tolerance : float The decimal level to check for equality. 
For example, 1e-2 would have 0.001 and 0.002 as equal, but 1e-3 would have them as unequal """ # TODO: use this to replace the rest of the check_equality code #return wcs1.wcs.compare(wcs2.wcs, cmp=wcs.WCSCOMPARE_ANCILLARY, # tolerance=tolerance) #Until we've switched to the wcs.compare approach, we need to have #np.testing.assert_almost_equal work if wcs_tolerance == 0: exact = True else: exact = False # np.testing.assert_almost_equal wants an integer # e.g., for 0.0001, the integer is 4 decimal = int(np.ceil(-np.log10(wcs_tolerance))) # naive version: # return str(wcs1.to_header()) != str(wcs2.to_header()) h1 = wcs1.to_header() h2 = wcs2.to_header() # Default to headers equal; everything below changes to false if there are # any inequalities OK = True # to keep track of keywords in both matched = [] for c1 in h1.cards: key = c1[0] if key in h2: matched.append(key) c2 = h2.cards[key] # special check for units: "m/s" = "m s-1" if 'UNIT' in key: u1 = u.Unit(c1[1]) u2 = u.Unit(c2[1]) if u1 != u2: if key in ignore_keywords: log.debug("IGNORED Header 1, {0}: {1} != {2}".format(key,u1,u2)) else: OK = False log.debug("Header 1, {0}: {1} != {2}".format(key,u1,u2)) elif isinstance(c1[1], float): try: if exact: assert c1[1] == c2[1] else: np.testing.assert_almost_equal(c1[1], c2[1], decimal=decimal) except AssertionError: if key in ('RESTFRQ','RESTWAV'): warnings.warn("{0} is not equal in WCS; ignoring ".format(key)+ "under the assumption that you want to" " compare velocity cubes.", WCSWarning) continue if key in ignore_keywords: log.debug("IGNORED Header 1, {0}: {1} != {2}".format(key,c1[1],c2[1])) else: log.debug("Header 1, {0}: {1} != {2}".format(key,c1[1],c2[1])) OK = False elif c1[1] != c2[1]: if key in ignore_keywords: log.debug("IGNORED Header 1, {0}: {1} != {2}".format(key,c1[1],c2[1])) else: log.debug("Header 1, {0}: {1} != {2}".format(key,c1[1],c2[1])) OK = False else: if warn_missing: warnings.warn("WCS2 is missing card {0}".format(key), WCSWarning) elif key not in ignore_keywords: OK = False # Check that there aren't any cards in header 2 that were missing from # header 1 for c2 in h2.cards: key = c2[0] if key not in matched: if warn_missing: warnings.warn("WCS1 is missing card {0}".format(key), WCSWarning) else: OK = False return OK def strip_wcs_from_header(header): """ Given a header with WCS information, remove ALL WCS information from that header """ hwcs = WCS(header) wcsh = hwcs.to_header() keys_to_keep = [k for k in header if (k and k not in wcsh and 'NAXIS' not in k)] newheader = header.copy() # Strip blanks first. They appear to cause serious problems, like not # deleting things they should!
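# --- Editorial sketch (hedged). Tolerance handling in `check_equality`
# above: `wcs_tolerance` is mapped onto assert_almost_equal's integer
# `decimal` argument (e.g. 1e-4 -> decimal=4). A toy check mirroring
# test_wcs_comparison (WCS and np are in scope in this module):
_wa = WCS(naxis=3)
_wa.wcs.crpix = np.array([50., 45., 30.], dtype='float64')
_wb = WCS(naxis=3)
_wb.wcs.crpix = np.array([50., 45., 30.0001], dtype='float64')
assert not check_equality(_wa, _wb)                  # exact comparison fails
assert check_equality(_wa, _wb, wcs_tolerance=1e-3)  # 1e-4 offset tolerated
# --- End sketch; blank cards are stripped first, as noted above.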
if '' in newheader: del newheader[''] for kw in list(newheader.keys()): if kw not in keys_to_keep: del newheader[kw] for kw in ('CRPIX{ii}', 'CRVAL{ii}', 'CDELT{ii}', 'CUNIT{ii}', 'CTYPE{ii}', 'PC0{ii}_0{jj}', 'CD{ii}_{jj}', 'CROTA{ii}', 'PC{ii}_{jj}', 'PC{ii:03d}{jj:03d}', 'PV0{ii}_0{jj}', 'PV{ii}_{jj}'): for ii in range(5): for jj in range(5): k = kw.format(ii=ii,jj=jj) if k in newheader.keys(): del newheader[k] return newheader def diagonal_wcs_to_cdelt(mywcs): """ If a WCS has only diagonal pixel scale matrix elements (which are composed from cdelt*pc), use them to reform the wcs as a CDELT-style wcs with no pc or cd elements """ offdiag = ~np.eye(mywcs.pixel_scale_matrix.shape[0], dtype='bool') if not any(mywcs.pixel_scale_matrix[offdiag]): cdelt = mywcs.pixel_scale_matrix.diagonal() del mywcs.wcs.pc del mywcs.wcs.cd mywcs.wcs.cdelt = cdelt return mywcs def is_pixel_axis_to_wcs_correlated(mywcs, axis): """ Check if the chosen pixel axis correlates to other WCS axes. This tests whether the pixel axis is correlated only to 1 WCS axis and can be considered independent of the others. """ axis_corr_matrix = mywcs.axis_correlation_matrix # Map from numpy axis to WCS axis wcs_axis = mywcs.world_n_dim - (axis + 1) # Grab the row along the given spatial axis. This slice is along the WCS axes wcs_axis_correlations = axis_corr_matrix[:, wcs_axis] # The image axis should always be correlated to at least 1 WCS axis. # i.e., the diagonal term is one in the matrix. Correlations with other axes will give # a sum > 1 if wcs_axis_correlations.sum() > 1: return True return False def find_spatial_pixel_index(cube, xlo, xhi, ylo, yhi): ''' Given low and high cuts, return the pixel coordinates for a rectangular region in the given cube or spatial projection. lo and hi inputs can be given in pixels, "min"/"max", or in world coordinates. When spatial WCS dimensions are given as an `~astropy.units.Quantity`, the spatial coordinates of the 'lo' and 'hi' corners are solved together. This minimizes WCS variations due to the sky curvature when slicing from a large (>1 deg) image. Parameters ---------- cube : :class:`~SpectralCube` or spatial :class:`~Projection` A spectral-cube or projection/slice with spatial dimensions. [xy]lo/[xy]hi : int or :class:`~astropy.units.Quantity` or ``min``/``max`` The endpoints to extract. If given as a ``Quantity``, will be interpreted as World coordinates. If given as a ``string`` or ``int``, will be interpreted as pixel coordinates. Returns ------- limit_dict : dict Pixel coordinates of [xy]lo/[xy]hi in the given ``cube``. ''' ndim = cube.ndim for val in (xlo,ylo,xhi,yhi): if hasattr(val, 'unit') and not val.unit.is_equivalent(u.degree): raise u.UnitsError("The X and Y slices must be specified in " "degree-equivalent units.") limit_dict = {} # Match corners. If one uses a WCS coord, set 'min'/'max' # To the lat or long extrema. # We only care about matching spatial corners. xlo_unit = hasattr(xlo, 'unit') ylo_unit = hasattr(ylo, 'unit') # Do min/max switching if the WCS grid increases/decreases # with the pixel grid. 
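# --- editor's note: illustrative sketch, not part of spectral-cube ---------
# diagonal_wcs_to_cdelt above only rewrites a WCS whose pixel scale matrix
# (cdelt * pc) is strictly diagonal; a rotated or skewed WCS is returned
# unchanged. Toy demo with invented header values; calls shown as comments.
from astropy.wcs import WCS

w = WCS(naxis=2)
w.wcs.ctype = ['RA---TAN', 'DEC--TAN']
w.wcs.pc = [[-2e-4, 0], [0, 2e-4]]  # diagonal PC with the default unit CDELT

# w = diagonal_wcs_to_cdelt(w)
# afterwards w.wcs.cdelt == [-2e-4, 2e-4] and the PC/CD matrices are removed
# ---------------------------------------------------------------------------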
ymin = min if cube.wcs.wcs.cdelt[1] > 0 else max xmin = min if cube.wcs.wcs.cdelt[0] > 0 else max ymax = max if cube.wcs.wcs.cdelt[1] > 0 else min xmax = max if cube.wcs.wcs.cdelt[0] > 0 else min if not any([xlo_unit, ylo_unit]): limit_dict['xlo'] = 0 if xlo == 'min' else xlo limit_dict['ylo'] = 0 if ylo == 'min' else ylo else: if xlo_unit: limit_dict['xlo'] = xlo limit_dict['ylo'] = ymin(cube.latitude_extrema) if ylo == 'min' else ylo if ylo_unit: limit_dict['ylo'] = ylo limit_dict['xlo'] = xmin(cube.longitude_extrema) if xlo == 'min' else xlo xhi_unit = hasattr(xhi, 'unit') yhi_unit = hasattr(yhi, 'unit') if not any([xhi_unit, yhi_unit]): # For 3D cube if ndim == 3: limit_dict['xhi'] = cube.shape[2] if xhi == 'max' else xhi limit_dict['yhi'] = cube.shape[1] if yhi == 'max' else yhi # For 2D spatial projection/slice else: limit_dict['xhi'] = cube.shape[1] if xhi == 'max' else xhi limit_dict['yhi'] = cube.shape[0] if yhi == 'max' else yhi else: if xhi_unit: limit_dict['xhi'] = xhi limit_dict['yhi'] = ymax(cube.latitude_extrema) if yhi == 'max' else yhi if yhi_unit: limit_dict['yhi'] = yhi limit_dict['xhi'] = xmax(cube.longitude_extrema) if xhi == 'max' else xhi # list to track which entries had units united = [] # Solve the spatial axes together. # There's 3 options: # (1) If both pixel units, do nothing # (2) If both WCS units, use world_to_array_index_values # (3) If mixed, minimize the distance between the spatial position grids # for the cube to find the closest spatial pixel. for corn in ['lo', 'hi']: grids = {} # Check if either were given as a WCS value with a unit x_hasunit = hasattr(limit_dict['x'+corn], 'unit') y_hasunit = hasattr(limit_dict['y'+corn], 'unit') # print(limit_dict['x'+corn], limit_dict['y'+corn]) # print(x_hasunit, y_hasunit) # (1) If both pixel units, we keep in pixel units. if not any([x_hasunit, y_hasunit]): continue # (2) If both WCS units, use world_to_array_index_values elif all([x_hasunit, y_hasunit]): corn_arr = np.array([limit_dict['x'+corn].value, limit_dict['y'+corn].value]) xmin, ymin = cube.wcs.celestial.world_to_array_index_values(corn_arr.reshape((1, 2)))[0] limit_dict['y' + corn] = ymin limit_dict['x' + corn] = xmin if corn == 'hi': united.append('y' + corn) united.append('x' + corn) # (3) If mixed, minimize the distance between the spatial position grids # for the cube to find the closest spatial pixel, limited to the 1 pixel # value that is given. else: # We change the dimensions being sliced depending on whether the # x or y dim is given in pixel units. # This allows for a 1D minimization instead of needing both spatial axes. if x_hasunit: pixval = limit_dict['y' + corn] lim = 'x' + corn slicedim = 0 else: pixval = limit_dict['x' + corn] lim = 'y' + corn slicedim = 1 if corn == 'lo': slice_pixdim = slice(pixval, pixval+1) else: slice_pixdim = slice(pixval-1, pixval) limval = limit_dict[lim] if hasattr(limval, 'unit'): united.append(lim) sl = [slice(None)] sl.insert(slicedim, slice_pixdim) if ndim == 3: sl.insert(0, slice(0, 1)) sl = tuple(sl) if slicedim == 0: spine = cube.world[sl][2 if ndim == 3 else 1] else: spine = cube.world[sl][1 if ndim == 3 else 0] val = np.argmin(np.abs(limval-spine)) if limval > spine.max() or limval < spine.min(): log.warning("The limit {0} is out of bounds." 
" Using min/max instead.".format(lim)) limit_dict[lim] = val # Correct ordering (this shouldn't be necessary but do a quick check) for xx in 'yx': hi,lo = limit_dict[xx+'hi'], limit_dict[xx+'lo'] if hi < lo: # must have high > low limit_dict[xx+'hi'], limit_dict[xx+'lo'] = lo, hi if xx+'hi' in united: # End-inclusive indexing: need to add one for the high slice # Only do this for converted values, not for pixel values # (i.e., if the xlo/ylo/zlo value had units) limit_dict[xx+'hi'] += 1 return limit_dict ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/spectral_cube/ytcube.py0000644000175100001710000002564200000000000020277 0ustar00runnerdockerfrom __future__ import print_function, absolute_import, division import six import os import subprocess import numpy as np import time from astropy.utils.console import ProgressBar from astropy import log import warnings __all__ = ['ytCube'] class ytCube(object): """ Light wrapper of a yt object with ability to translate yt<->wcs coordinates """ def __init__(self, cube, dataset, spectral_factor=1.0): self.cube = cube self.wcs = cube.wcs self.dataset = dataset self.spectral_factor = spectral_factor def world2yt(self, world_coord, first_index=0): """ Convert a position in world coordinates to the coordinates used by a yt dataset that has been generated using the ``to_yt`` method. Parameters ---------- world_coord: `astropy.wcs.WCS.wcs_world2pix`-valid input The world coordinates first_index: 0 or 1 The first index of the data. In python and yt, this should be zero, but for the FITS coordinates, use 1 """ yt_coord = self.wcs.wcs_world2pix([world_coord], first_index)[0] yt_coord[2] = (yt_coord[2] - 0.5)*self.spectral_factor+0.5 return yt_coord def yt2world(self, yt_coord, first_index=0): """ Convert a position in yt's coordinates to world coordinates from a yt dataset that has been generated using the ``to_yt`` method. Parameters ---------- world_coord: `astropy.wcs.WCS.wcs_pix2world`-valid input The yt pixel coordinates to convert back to world coordinates first_index: 0 or 1 The first index of the data. In python and yt, this should be zero, but for the FITS coordinates, use 1 """ yt_coord = np.array(yt_coord) # stripping off units yt_coord[2] = (yt_coord[2] - 0.5)/self.spectral_factor+0.5 world_coord = self.wcs.wcs_pix2world([yt_coord], first_index)[0] return world_coord def quick_render_movie(self, outdir, size=256, nframes=30, camera_angle=(0,0,1), north_vector=(0,0,1), rot_vector=(1,0,0), colormap='doom', cmap_range='auto', transfer_function='auto', start_index=0, image_prefix="", output_filename='out.mp4', log_scale=False, rescale=True): """ Create a movie rotating the cube 360 degrees from PP -> PV -> PP -> PV -> PP Parameters ---------- outdir: str The output directory in which the individual image frames and the resulting output mp4 file should be stored size: int The size of the individual output frame in pixels (i.e., size=256 will result in a 256x256 image) nframes: int The number of frames in the resulting movie camera_angle: 3-tuple The initial angle of the camera north_vector: 3-tuple The vector of 'north' in the data cube. Default is coincident with the spectral axis rot_vector: 3-tuple The vector around which the camera will be rotated colormap: str A valid colormap. 
See `yt.show_colormaps` transfer_function: 'auto' or `yt.visualization.volume_rendering.TransferFunction` Either 'auto' to use the colormap specified, or a valid TransferFunction instance log_scale: bool Should the colormap be log scaled? rescale: bool If True, the images will be rescaled to have a common 95th percentile brightness, which can help reduce flickering from having a single bright pixel in some projections start_index : int The number of the first image to save image_prefix : str A string to prepend to the image name for each image that is output output_filename : str The movie file name to output. The suffix may affect the file type created. Defaults to 'out.mp4'. Will be placed in ``outdir`` Returns ------- """ try: import yt except ImportError: raise ImportError("yt could not be imported. Cube renderings are not possible.") scale = np.max(self.cube.shape) if not os.path.exists(outdir): os.makedirs(outdir) elif not os.path.isdir(outdir): raise OSError("Output directory {0} exists and is not a directory.".format(outdir)) if cmap_range == 'auto': upper = self.cube.max().value lower = self.cube.std().value * 3 cmap_range = [lower,upper] if transfer_function == 'auto': tfh = self.auto_transfer_function(cmap_range, log=log_scale) tfh.tf.map_to_colormap(cmap_range[0], cmap_range[1], colormap=colormap) tf = tfh.tf else: tf = transfer_function center = self.dataset.domain_center cam = self.dataset.h.camera(center, camera_angle, scale, size, tf, north_vector=north_vector, fields='flux') im = cam.snapshot() images = [im] pb = ProgressBar(nframes) for ii,im in enumerate(cam.rotation(2 * np.pi, nframes, rot_vector=rot_vector)): images.append(im) im.write_png(os.path.join(outdir,"%s%04i.png" % (image_prefix, ii+start_index)), rescale=False) pb.update(ii+1) log.info("Rendering complete in {0}s".format(time.time() - pb._start_time)) if rescale: _rescale_images(images, os.path.join(outdir, image_prefix)) pipe = _make_movie(outdir, prefix=image_prefix, filename=output_filename) return images def auto_transfer_function(self, cmap_range, log=False, colormap='doom', **kwargs): from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper tfh = TransferFunctionHelper(self.dataset) tfh.set_field('flux') tfh.set_bounds(bounds=cmap_range) tfh.set_log(log) tfh.build_transfer_function() return tfh def quick_isocontour(self, level='3 sigma', title='', description='', color_map='hot', color_log=False, export_to='sketchfab', filename=None, **kwargs): """ Export isocontours to sketchfab Requires that you have an account on https://sketchfab.com and are logged in Parameters ---------- level: str or float The level of the isocontours to create. Can be specified as n-sigma with strings like '3.3 sigma' or '2 sigma' (there must be a space between the number and the word) title: str A title for the uploaded figure description: str A short description for the uploaded figure color_map: str Any valid colormap. See `yt.show_colormaps` color_log: bool Whether the colormap should be log scaled. With the default parameters, this has no effect. export_to: 'sketchfab', 'obj', 'ply' You can export to sketchfab, to a .obj file (and accompanying .mtl file), or a .ply file. The latter two require ``filename`` specification filename: None or str Optional - prefix for output filenames if ``export_to`` is 'obj', or the full filename when ``export_to`` is 'ply'. 
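# --- editor's note: illustrative sketch, not part of spectral-cube ---------
# quick_isocontour accepts levels like '3 sigma' (a number, a space, then
# the word): it splits on whitespace, takes the leading number, and
# multiplies by the cube standard deviation. Minimal re-implementation of
# just that parsing; `parse_level` is a hypothetical helper and the sigma
# values are made up.
def parse_level(level, sigma):
    # '<n> sigma' -> n * sigma; numeric levels pass through unchanged
    if isinstance(level, str):
        return float(level.split()[0]) * sigma
    return level

assert parse_level('3 sigma', sigma=0.5) == 1.5
assert parse_level('2.5 sigma', sigma=2.0) == 5.0
assert parse_level(1.25, sigma=0.5) == 1.25
# ---------------------------------------------------------------------------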
Ignored for 'sketchfab' kwargs: dict Keyword arguments are passed to the appropriate yt function Returns ------- The result of the `yt.surface.export_sketchfab` function """ if isinstance(level, six.string_types): sigma = self.cube.std().value level = float(level.split()[0]) * sigma self.dataset.periodicity = (True,True,True) surface = self.dataset.surface(self.dataset.all_data(), "flux", level) if export_to == 'sketchfab': if filename is not None: warnings.warn("sketchfab export does not expect a filename entry") return surface.export_sketchfab(title=title, description=description, color_map=color_map, color_log=color_log, **kwargs) elif export_to == 'obj': if filename is None: raise ValueError("If export_to is not 'sketchfab'," " a filename must be specified") surface.export_obj(filename, color_field='ones', color_map=color_map, color_log=color_log, **kwargs) elif export_to == 'ply': if filename is None: raise ValueError("If export_to is not 'sketchfab'," " a filename must be specified") surface.export_ply(filename, color_field='ones', color_map=color_map, color_log=color_log, **kwargs) else: raise ValueError("export_to must be one of sketchfab,obj,ply") def _rescale_images(images, prefix): """ Save a sequence of images, at a common scaling Reduces flickering """ cmax = max(np.percentile(i[:, :, :3].sum(axis=2), 99.5) for i in images) amax = max(np.percentile(i[:, :, 3], 95) for i in images) for i, image in enumerate(images): image = image.rescale(cmax=cmax, amax=amax).swapaxes(0,1) image.write_png("%s%04i.png" % (prefix, i), rescale=False) def _make_movie(moviepath, prefix="", filename='out.mp4', overwrite=True): """ Use ffmpeg to generate a movie from the image series """ outpath = os.path.join(moviepath, filename) if os.path.exists(outpath) and overwrite: command = ['ffmpeg', '-y', '-r','5','-i', os.path.join(moviepath,prefix+'%04d.png'), '-r','30','-pix_fmt', 'yuv420p', outpath] elif os.path.exists(outpath): log.info("File {0} exists - skipping".format(outpath)) return None else: command = ['ffmpeg', '-r', '5', '-i', os.path.join(moviepath,prefix+'%04d.png'), '-r','30','-pix_fmt', 'yuv420p', outpath] pipe = subprocess.Popen(command, stdout=subprocess.PIPE, close_fds=True) pipe.wait() return pipe ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1633017864.73303 spectral-cube-0.6.0/spectral_cube.egg-info/0000755000175100001710000000000000000000000020113 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 spectral-cube-0.6.0/spectral_cube.egg-info/PKG-INFO0000644000175100001710000000416100000000000021212 0ustar00runnerdockerMetadata-Version: 2.1 Name: spectral-cube Version: 0.6.0 Summary: A package for interaction with spectral cubes Home-page: http://spectral-cube.readthedocs.org Author: Adam Ginsburg, Tom Robitaille, Chris Beaumont, Adam Leroy, Erik Rosolowsky, and Eric Koch Author-email: adam.g.ginsburg@gmail.com License: BSD Platform: UNKNOWN Provides-Extra: test Provides-Extra: docs Provides-Extra: novis Provides-Extra: all License-File: LICENSE.rst About ===== |Join the chat at https://gitter.im/radio-astro-tools/spectral-cube| This package aims to facilitate the reading, writing, manipulation, and analysis of spectral data cubes. More information is available in the documentation, `online at readthedocs.org `__. ..
figure:: http://img.shields.io/badge/powered%20by-AstroPy-orange.svg?style=flat :alt: Powered by Astropy Badge Powered by Astropy Badge Credits ======= This package is developed by: - Chris Beaumont (`@ChrisBeaumont `__) - Adam Ginsburg (`@keflavich `__) - Adam Leroy (`@akleroy `__) - Thomas Robitaille (`@astrofrog `__) - Erik Rosolowsky (`@low-sky `__) - Eric Koch (`@e-koch `__) Build and coverage status ========================= |Build Status| |Coverage Status| |DOI| .. |Join the chat at https://gitter.im/radio-astro-tools/spectral-cube| image:: https://badges.gitter.im/Join%20Chat.svg :target: https://gitter.im/radio-astro-tools/spectral-cube?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge .. |Build Status| image:: https://travis-ci.org/radio-astro-tools/spectral-cube.png?branch=master :target: https://travis-ci.org/radio-astro-tools/spectral-cube .. |Coverage Status| image:: https://coveralls.io/repos/radio-astro-tools/spectral-cube/badge.svg?branch=master :target: https://coveralls.io/r/radio-astro-tools/spectral-cube?branch=master .. |DOI| image:: https://zenodo.org/badge/doi/10.5281/zenodo.11485.svg :target: http://dx.doi.org/10.5281/zenodo.11485 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 spectral-cube-0.6.0/spectral_cube.egg-info/SOURCES.txt0000644000175100001710000001230700000000000022002 0ustar00runnerdocker.gitignore .readthedocs.yml CHANGES.rst LICENSE.rst MANIFEST.in README.rst conftest.py pyproject.toml setup.cfg setup.py tox.ini .github/workflows/main.yml .github/workflows/publish.yml docs/Makefile docs/accessing.rst docs/api.rst docs/arithmetic.rst docs/beam_handling.rst docs/big_data.rst docs/conf.py docs/continuum_subtraction.rst docs/creating_reading.rst docs/dask.rst docs/errors.rst docs/examples.rst docs/index.rst docs/installing.rst docs/manipulating.rst docs/masking.rst docs/metadata.rst docs/moments.rst docs/nitpick-exceptions docs/quick_looks.rst docs/rtd-pip-requirements docs/smoothing.rst docs/spectral_extraction.rst docs/stokes.rst docs/visualization.rst docs/writing.rst docs/yt_example.rst docs/_templates/autosummary/base.rst docs/_templates/autosummary/class.rst docs/_templates/autosummary/module.rst spectral_cube/__init__.py spectral_cube/_astropy_init.py spectral_cube/_moments.py spectral_cube/analysis_utilities.py spectral_cube/base_class.py spectral_cube/conftest.py spectral_cube/cube_utils.py spectral_cube/dask_spectral_cube.py spectral_cube/lower_dimensional_structures.py spectral_cube/masks.py spectral_cube/np_compat.py spectral_cube/spectral_axis.py spectral_cube/spectral_cube.py spectral_cube/stokes_spectral_cube.py spectral_cube/utils.py spectral_cube/version.py spectral_cube/visualization-tools.py spectral_cube/wcs_utils.py spectral_cube/ytcube.py spectral_cube.egg-info/PKG-INFO spectral_cube.egg-info/SOURCES.txt spectral_cube.egg-info/dependency_links.txt spectral_cube.egg-info/not-zip-safe spectral_cube.egg-info/requires.txt spectral_cube.egg-info/top_level.txt spectral_cube/io/__init__.py spectral_cube/io/casa_image.py spectral_cube/io/casa_masks.py spectral_cube/io/class_lmv.py spectral_cube/io/core.py spectral_cube/io/fits.py spectral_cube/tests/__init__.py spectral_cube/tests/helpers.py spectral_cube/tests/setup_package.py spectral_cube/tests/test_analysis_functions.py spectral_cube/tests/test_casafuncs.py spectral_cube/tests/test_cube_utils.py spectral_cube/tests/test_dask.py spectral_cube/tests/test_io.py spectral_cube/tests/test_masks.py 
spectral_cube/tests/test_moments.py spectral_cube/tests/test_performance.py spectral_cube/tests/test_projection.py spectral_cube/tests/test_regrid.py spectral_cube/tests/test_spectral_axis.py spectral_cube/tests/test_spectral_cube.py spectral_cube/tests/test_stokes_spectral_cube.py spectral_cube/tests/test_subcubes.py spectral_cube/tests/test_visualization.py spectral_cube/tests/test_wcs_utils.py spectral_cube/tests/utilities.py spectral_cube/tests/data/255-fk5.reg spectral_cube/tests/data/255-pixel.reg spectral_cube/tests/data/Makefile spectral_cube/tests/data/__init__.py spectral_cube/tests/data/cubewcs1.hdr spectral_cube/tests/data/cubewcs2.hdr spectral_cube/tests/data/example_cube.fits spectral_cube/tests/data/example_cube.lmv spectral_cube/tests/data/fk5.reg spectral_cube/tests/data/fk5_twoboxes.reg spectral_cube/tests/data/greisen2006.hdr spectral_cube/tests/data/header_jybeam.hdr spectral_cube/tests/data/image.reg spectral_cube/tests/data/no_overlap_fk5.reg spectral_cube/tests/data/no_overlap_image.reg spectral_cube/tests/data/partial_overlap_fk5.reg spectral_cube/tests/data/partial_overlap_image.reg spectral_cube/tests/data/basic.image/table.dat spectral_cube/tests/data/basic.image/table.f0 spectral_cube/tests/data/basic.image/table.f0_TSM0 spectral_cube/tests/data/basic.image/table.info spectral_cube/tests/data/basic.image/table.lock spectral_cube/tests/data/basic.image/logtable/table.dat spectral_cube/tests/data/basic.image/logtable/table.f0 spectral_cube/tests/data/basic.image/logtable/table.info spectral_cube/tests/data/basic.image/logtable/table.lock spectral_cube/tests/data/basic.image/mask0/table.dat spectral_cube/tests/data/basic.image/mask0/table.f0 spectral_cube/tests/data/basic.image/mask0/table.f0_TSM0 spectral_cube/tests/data/basic.image/mask0/table.info spectral_cube/tests/data/basic.image/mask0/table.lock spectral_cube/tests/data/basic_bigendian.image/table.dat spectral_cube/tests/data/basic_bigendian.image/table.f0 spectral_cube/tests/data/basic_bigendian.image/table.f0_TSM0 spectral_cube/tests/data/basic_bigendian.image/table.info spectral_cube/tests/data/basic_bigendian.image/table.lock spectral_cube/tests/data/basic_bigendian.image/logtable/table.dat spectral_cube/tests/data/basic_bigendian.image/logtable/table.f0 spectral_cube/tests/data/basic_bigendian.image/logtable/table.info spectral_cube/tests/data/basic_bigendian.image/logtable/table.lock spectral_cube/tests/data/basic_bigendian.image/mask0/table.dat spectral_cube/tests/data/basic_bigendian.image/mask0/table.f0 spectral_cube/tests/data/basic_bigendian.image/mask0/table.f0_TSM0 spectral_cube/tests/data/basic_bigendian.image/mask0/table.info spectral_cube/tests/data/basic_bigendian.image/mask0/table.lock spectral_cube/tests/data/nomask.image/table.dat spectral_cube/tests/data/nomask.image/table.f0 spectral_cube/tests/data/nomask.image/table.f0_TSM0 spectral_cube/tests/data/nomask.image/table.info spectral_cube/tests/data/nomask.image/table.lock spectral_cube/tests/data/nomask.image/logtable/table.dat spectral_cube/tests/data/nomask.image/logtable/table.f0 spectral_cube/tests/data/nomask.image/logtable/table.info spectral_cube/tests/data/nomask.image/logtable/table.lock././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 spectral-cube-0.6.0/spectral_cube.egg-info/dependency_links.txt0000644000175100001710000000000100000000000024161 0ustar00runnerdocker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 
spectral-cube-0.6.0/spectral_cube.egg-info/not-zip-safe0000644000175100001710000000000100000000000022341 0ustar00runnerdocker ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 spectral-cube-0.6.0/spectral_cube.egg-info/requires.txt0000644000175100001710000000053200000000000022513 0ustar00runnerdockerastropy numpy>=1.8.0 radio_beam>=0.3.3 six dask[array] joblib casa-formats-io [all] zarr fsspec distributed aplpy glue-core[qt] matplotlib pvextractor regions reproject scipy [all:python_version < "3.8"] yt [docs] sphinx-astropy matplotlib [novis] zarr fsspec distributed pvextractor regions reproject scipy [test] pytest-astropy pytest-cov ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017864.0 spectral-cube-0.6.0/spectral_cube.egg-info/top_level.txt0000644000175100001710000000001600000000000022642 0ustar00runnerdockerspectral_cube ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1633017852.0 spectral-cube-0.6.0/tox.ini0000644000175100001710000000251500000000000015124 0ustar00runnerdocker[tox] envlist = py{36,37,38,39}-test{,-all,-dev,-novis,-cov} build_docs codestyle requires = setuptools >= 30.3.0 pip >= 19.3.1 isolated_build = true indexserver = NRAO = https://casa-pip.nrao.edu/repository/pypi-group/simple [testenv] passenv = HOME DISPLAY LC_ALL LC_CTYPE ON_TRAVIS changedir = .tmp/{envname} description = run tests with pytest deps = dev: git+https://github.com/radio-astro-tools/pvextractor#egg=pvextractor dev: git+https://github.com/radio-astro-tools/radio-beam#egg=radio-beam dev: git+https://github.com/astropy/astropy#egg=astropy dev: git+https://github.com/astropy/reproject#egg=reproject casa: :NRAO:casatools casa: :NRAO:casatasks extras = test all: all commands = pip freeze !cov: pytest --open-files --pyargs spectral_cube {toxinidir}/docs {posargs} cov: pytest --open-files --pyargs spectral_cube {toxinidir}/docs --cov spectral_cube --cov-config={toxinidir}/setup.cfg {posargs} cov: coverage xml -o {toxinidir}/coverage.xml [testenv:build_docs] changedir = docs description = invoke sphinx-build to build the HTML docs extras = docs commands = sphinx-build -W -b html . _build/html {posargs} [testenv:codestyle] deps = flake8 skip_install = true commands = flake8 --max-line-length=100 spectral_cube
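# --- editor's note: illustrative sketch, not part of spectral-cube ---------
# The non-coverage tox `commands` line above reduces to a single pytest
# invocation; the same run can be launched from Python directly. This
# assumes pytest, pytest-astropy (for --open-files), and spectral-cube are
# installed in the current environment, and that it is run from the repo
# root so the docs/ path resolves.
import pytest

# equivalent of: pytest --open-files --pyargs spectral_cube docs
exit_code = pytest.main(['--open-files', '--pyargs', 'spectral_cube', 'docs'])
# ---------------------------------------------------------------------------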